
The FSF is hiring: Seeking a full-time outreach and communication coordinator

FSF - Tue, 06/16/2015 - 20:25

The Free Software Foundation (FSF), a Boston-based 501(c)(3) charity with a worldwide mission to protect freedoms critical to the computer-using public, seeks a motivated, organized, and tech-friendly Boston-based individual to be its full-time outreach and communication coordinator.

This position, reporting to the executive director, works closely with our campaigns, licensing, and technical staff, as well as our board of directors, to edit, publish, and promote high-quality, effective materials both digital and printed.

These materials are a critical part of advancing the FSF's work to support the GNU Project, free software adoption, free media formats, and freedom on the Internet; and to oppose DRM, software patents, and proprietary software.

Some of the position's more important responsibilities include:

  • stewarding the online publication and editing process for all outreach staff, including copyediting, formatting, posting, and maintaining material on our Web sites, and sending out e-mail messages to our lists;

  • producing and improving our monthly e-mail newsletter, the Free Software Supporter;

  • improving the effectiveness of our use of audio and video materials;

  • editing and building our biannual printed Bulletin;

  • promoting our work and the work of others in the area of computing freedom on social networking sites;

  • helping to produce fundraising materials and assisting with our fundraising drives;

  • cultivating the community around the LibrePlanet wiki and network, including the annual conference;

  • working with and encouraging volunteers; and

  • being an approachable, humble, and friendly representative of the FSF to our worldwide community of existing supporters and the broader public, both in person and online.

A successful candidate will have strong editing skills, especially in the area of copyediting, and will take pride in working with a team to create consistently polished and effective materials.

While this is a job for a person who is passionate about technology and its social impact, it is not primarily a technical position. The main technical requirement is a willingness to learn to use many new and possibly unfamiliar pieces of software, with a positive attitude. That being said, experience with CiviCRM and GNU/Linux will be considered a big plus, and experience with any of the following technologies should be mentioned: Plone, Drupal, Ikiwiki, Subversion, Git, CVS, SSH, JavaScript, CSS, HTML, Emacs, LaTeX, Inkscape, GIMP, Markdown, or MediaWiki.

Because the FSF works globally and seeks to have our materials distributed in as many languages as possible, multilingual candidates will be noticed. English, German, French, Spanish, Mandarin, Malagasy, and a smattering of Japanese are represented among current FSF staff.

With our small staff of twelve, each person makes a clear contribution. We work hard, but offer a humane and fun work environment.

Benefits and salary

The job must be worked on-site at the FSF's office in downtown Boston.

This is a union position. The salary is fixed at $51,646.40 and is non-negotiable. Other benefits include:

  • full family health coverage through Blue Cross/Blue Shield's HMO Blue program,
  • subsidized dental plan,
  • four weeks of paid vacation annually,
  • seventeen paid holidays annually,
  • public transit commuting cost reimbursement,
  • 403(b) program through TIAA-CREF,
  • yearly cost-of-living pay increases, and
  • potential for an annual performance bonus.
Application instructions

Applications must be submitted via email to hiring@fsf.org. The email must contain the subject line "Outreach and Communications Coordinator". A complete application should include:

  • resume,
  • cover letter,
  • writing sample (1000 words or less),
  • links to published work online, and
  • three or more edits you would suggest to this job posting.

All materials must be in a free format (such as plain text, PDF, or OpenDocument, and not Microsoft Word). Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be considered on a rolling basis until the position is filled. To ensure consideration, apply before 10:00am EDT on Wednesday, July 1st.

The FSF is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state, or local law. We value diversity in our workplace.

Categories: Free Software

Free Software Foundation announces Deputy Director search

FSF - Wed, 05/13/2015 - 21:56

The Free Software Foundation (FSF), a Boston-based 501(c)(3) charity with a worldwide mission to protect freedoms critical to the computer-using public, would love to find an experienced, Boston-based Deputy Director to expand our leadership team.

This new position would work closely in support of the executive director to coordinate and amplify the work of an expanding, 12-person staff; represent the FSF to conference, supporter, and donor audiences internationally; and play a key role in improving the FSF's overall effectiveness by driving initiative prioritization, fundraising, resource allocation, hiring, and internal process development.

Now is an especially exciting time to join the FSF team, since this year is our 30th anniversary. We are taking the opportunity to both reflect on the past and plan ahead for the next 30 years.

In addition to being a talented general manager and project coordinator, the right candidate will bring significant expertise to at least one of the FSF's major work areas -- technology infrastructure and software development, licensing and compliance, public advocacy and engagement, fundraising, or operations.

This role is for someone who:

  • is a dedicated free software user;
  • cares deeply about the impact of control over technology on the exercise of individual freedoms;
  • stays highly organized, even during high-stress situations;
  • inspires and motivates others;
  • is a reliably rational, diplomatic, and productive voice in discussions, both online and offline;
  • loves puzzles and problem-solving; and
  • enjoys the challenges of working in the public eye, including fielding and responding to criticisms.

Because of financial control duties, the position must be worked from the FSF's headquarters in Boston, Massachusetts. Relocation assistance is available. Candidates currently located outside the US may apply; we have sponsored visas in the past.

Salary would be commensurate with experience. Benefits include:

  • full family health coverage through Blue Cross/Blue Shield's HMO Blue program,
  • subsidized dental plan,
  • four weeks of paid vacation annually,
  • seventeen paid holidays annually,
  • public transit commuting cost reimbursement,
  • 403(b) program through TIAA-CREF,
  • a shiny silver Deputy star,
  • yearly cost-of-living pay increases, and
  • potential for an annual performance bonus.

Applications must be submitted via email to hiring@fsf.org. The email must contain the subject line "Deputy Director". A complete application should include:

  • resume or CV,
  • cover letter,
  • writing sample (1000 words or less), and
  • links to published work online, such as articles, code contributions, or conference presentation videos.

All materials must be in a free format. Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be evaluated on a rolling basis.

The FSF is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state, or local law. We value diversity in our workplace.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Categories: Free Software

The FSF is hiring: Seeking a Boston-area full-time Web Developer

FSF - Thu, 05/07/2015 - 21:25

This position, reporting to the executive director, works closely with our sysadmin team to maintain and improve the FSF's Web presence. It's an especially exciting time to join the FSF team, because we will be celebrating our 30th anniversary this October.

The FSF uses several different free software web platforms in the course of its work, both internally and externally. These platforms are critical to work supporting the GNU Project, free software adoption, free media formats, and freedom on the Internet; and to opposing bulk surveillance, Digital Restrictions Management, software patents, and proprietary software.

We are looking for someone who is primarily interested in keeping these systems up-to-date and working, as well as customizing them when necessary. While the main duties will relate to the backend systems, frontend experience with templates, HTML, CSS, JavaScript, and design tools will be a big plus.

The Web Developer will also contribute to decisions about which new platforms to use or which existing ones to retire. The infrastructure of www.fsf.org, shop.fsf.org, and audio-video.gnu.org will likely be changed this year, so there will be some critically important research and work to be done right away.

We emphasize opportunities to contribute work done at the FSF to the upstream projects we use, to benefit the broader free software community.

You'll primarily work with:

  • CiviCRM
  • Drupal
  • MediaWiki
  • Plone / Zope
  • Ikiwiki
  • Request Tracker
  • Django / Satchmo
  • Etherpad
  • CAS
  • GNU social
  • GNU MediaGoblin

Because the FSF works globally and seeks to have our materials distributed in as many languages as possible, multilingual candidates will have an advantage. English, German, French, Spanish, Mandarin, Malagasy, and a little Japanese are represented among current FSF staff.

With our small staff of twelve, each person makes a clear contribution. We work hard, but offer a humane and fun work environment at an office located in the heart of downtown Boston.

The FSF is a mature but growing organization that provides great potential for advancement; existing staff get the first chance at any new job openings. This position is a great starting point for anyone who might be interested in other roles on our technical team in the future.

Benefits and salary

The job must be worked on-site at the FSF's downtown Boston office. An on-site interview with the executive director and other team members will be required.

This job is a union position. The salary is fixed at $51,646.40 annually. Other benefits include:

  • conference travel opportunities,
  • full family health coverage through Blue Cross/Blue Shield's HMO Blue program,
  • subsidized dental plan,
  • four weeks of paid vacation annually,
  • seventeen paid holidays annually,
  • public transit commuting cost reimbursement,
  • 403(b) program through TIAA-CREF,
  • yearly cost-of-living pay increases, and
  • potential for an annual performance bonus.
Application instructions

Applications must be submitted via email to hiring@fsf.org. The email must contain the subject line "Web Developer". A complete application should include:

  • resume,
  • cover letter, and
  • links to any previous work online.

All materials must be in a free format (such as plain text, PDF, or OpenDocument, and not Microsoft Word). Email submissions that do not follow these instructions will probably be overlooked. No phone calls, please.

Applications will be reviewed on a rolling basis until the position is filled. To guarantee consideration, submit your application by Wednesday, May 27th, 10:00AM EDT.

The FSF is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, marital status, religion, age, sex, sexual orientation, national origin, handicap, or any other legally protected status recognized by federal, state, or local law. We value diversity in our workplace.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Categories: Free Software

Community is the focus of 2015's International Day Against DRM

FSF - Tue, 05/05/2015 - 23:40

Participating groups are united in envisioning a world without Digital Restrictions Management (DRM), technology that places arbitrary restrictions on what people can do with digital media, often by spying on them. As the largest anti-DRM event in the world, the International Day Against DRM is an important counterpoint to the pro-DRM message broadcast by powerful media and software companies. The Day is coordinated by Defective by Design, the anti-DRM campaign of the Free Software Foundation.

This year, community members are the highlight of the Day. Activists have organized twelve events in Bangladesh, Canada, England, Guatemala, Italy, the Netherlands, Portugal, the US, and Greece (as of May 5th).

Events in at least nine countries. See dayagainstdrm.org for the most up-to-date list.

Four individuals with unique perspectives also worked with Defective by Design to write community posts: two blind anti-DRM activists, an anti-DRM tech librarian, and a social scientist/activist analyzing the rise of DRM in streaming media services.

Bookstores and publishers, including O'Reilly Media, are offering sales on DRM-free media, and advocacy organizations allied with Defective by Design will be making official statements. Activists in Russia, Romania, and France have already translated the anti-DRM flyer into their native languages, and more translations are in progress. More groups are expected to join on the day itself.

Zak Rogoff, campaigns manager for the Free Software Foundation, said "Powerful entertainment and technology companies use DRM to restrict our use of digital media, demanding control over our computers and network connections in the process. Our community is doing everything we can to organize and build tools to protect our freedom. Our opponents are strong enough to have the government on their side in most countries, but when we come together, we are strong too."

Individuals can participate with a variety of online and in-person actions on dayagainstdrm.org, from media downloads to gatherings. To be part of Defective by Design's year-round anti-DRM campaigns, supporters can join the low-volume Action Alerts email list or join the discussion on the email discussion list or #dbd IRC channel. Media stores, activist organizations and other groups interested in participating in the International Day Against DRM today or in 2016 should contact info@defectivebydesign.org.

About Defective By Design

Defective by Design is the Free Software Foundation's campaign against Digital Restrictions Management (DRM). DRM is the practice of imposing technological restrictions that control what users can do with digital media, creating a good that is defective by design. DRM requires the use of proprietary software and is a major threat to computer user freedom. It often spies on users as well. The campaign, based at defectivebydesign.org, organizes anti-DRM activists for in-person and online actions, and challenges powerful media and technology interests promoting DRM. Supporters can donate to the campaign at https://crm.fsf.org/civicrm/contribute/transact?reset=1&id=40.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

Media Contact

Zak Rogoff
Campaigns Manager
Free Software Foundation
(202) 489-6887
campaigns@fsf.org

###

Categories: Free Software

LibrePlanet 2015 brings free software luminaries to MIT

FSF - Tue, 03/24/2015 - 22:40

Richard Stallman gave the opening keynote

At a ceremony on Saturday, March 21st, Free Software Foundation executive director John Sullivan announced the winners of the FSF's annual Free Software Awards. Two awards were given: the Award for the Advancement of Free Software was presented to Sébastien Jodogne for his work on free software medical imaging, and the Award for Projects of Social Benefit was presented to Reglue, an Austin, TX organization that gives GNU/Linux laptops to families in need.

Software Freedom Conservancy executive director Karen Sandler closed out the conference with a rallying cry to "Stand up for the GNU GPL," in which she discussed a lawsuit recently filed in Germany to defend the GNU General Public License. When she asked the audience who was willing to stand up for copyleft, the entire room rose to its feet.

Karen Sandler gave the closing keynote

Videos of all the conference sessions, along with photographs from the conference, will soon be available on https://media.libreplanet.org, the conference's instance of GNU MediaGoblin, a free software media publishing platform that anyone can run.

LibrePlanet 2015 was produced in partnership by the Free Software Foundation and the Student Information Processing Board (SIPB) at MIT.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Libby Reinish
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Categories: Free Software

Spherical Stereoscopic Panoramas

Blender - Mon, 03/23/2015 - 12:16

This week I visited the Blender Institute and decided to wrap up the multiview project. But since I had an Oculus DK2 with me I decided to patch multiview to support Virtual Reality gadgets.

Gooseberry Benchmark viewed with an Oculus DK2

There is something tricky about them. You can’t just render a pair of panoramas and expect them to work. The image would work great for the virtual objects in front of you, but the stereo eyes would be swapped when you look behind you.

How to solve that? The technique is the same one as presented in the 3D Fulldome Teaser. We start by determining an interocular distance and a convergence distance based on the stereo depth we want to convey. From there Cycles will rotate a ‘virtual’ stereo camera pair for each pixel to be rendered, so that both cameras’ rays converge at the specified distance. The zero parallax will be experienced at the convergence distance.
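The per-pixel rotation is easier to see in code. Below is a minimal, self-contained sketch of the toe-in convergence idea, restricted to the horizontal plane; it is an illustration rather than actual Cycles code, and the function name, defaults, and sign conventions are my own:

    import math

    def eye_ray(theta, interocular=0.065, convergence=1.0, side=-1):
        """One eye of the 'virtual' stereo pair for the pixel whose view
        direction sits at longitude theta (radians). side: -1 = left eye,
        +1 = right eye. Returns (origin, direction) in the horizontal plane."""
        # View direction for this column of the equirectangular image.
        view = (math.sin(theta), math.cos(theta))
        # The eye sits on the tangent, perpendicular to the view direction,
        # offset by half the interocular distance.
        tangent = (view[1], -view[0])
        origin = (side * 0.5 * interocular * tangent[0],
                  side * 0.5 * interocular * tangent[1])
        # Toe the ray in so the two eyes' rays meet at the convergence
        # distance; objects there are experienced at zero parallax.
        toe_in = -side * math.atan2(0.5 * interocular, convergence)
        c, s = math.cos(toe_in), math.sin(toe_in)
        direction = (c * view[0] - s * view[1],
                     s * view[0] + c * view[1])
        return origin, direction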

Oculus barrel correction screen shader applied to a view inside the panorama

This may sound complicated, but it’s all done under the hood. If you want to read more about this technique I recommend this paper from Paul Bourke on Synthetic stereoscopic panoramic images. The paper is from 2006 so there is nothing new under the Sun.

If you have an Oculus DK2 or similar device, you can grab the final image below to play with. I used Whirligig to visualize the stereo panorama, but there are other alternatives out there.

Top-Bottom Spherical Stereo Equirectangular Panorama

This image was generated with a spin-off branch of multiview named Multiview Spherical Stereo. I’m still looking for an industry-standard name for this method – “Omnidirectional Stereo” is a strong contender.

I would also like to remark on the relevance of open projects such as Gooseberry. The always warm-welcoming Gooseberry team just released their benchmark file, which I ended up using for these tests. Being able to take a production-quality shot and run whatever multi-vr-pano-full-thing you may think of is priceless.

Builds

If you want to try to render your own Spherical Stereo Panoramas, I built the patch for the three main platforms.

* Don’t get frustrated if the links are dead. As soon as this feature is officially supported by Blender I will remove them. So if that’s the case, get a new Blender.

How to render in three steps
  1. Enable ‘Views’ in the Render Layer panel
  2. Change camera to panorama
  3. Panorama type to Equirectangular

And leave ‘Spherical Stereo’ marked (it’s on by default at the moment).
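For those who prefer to do the three steps from the Python console, here is a minimal bpy sketch of the same settings. The multiview and panorama properties are the standard 2.7x API; the spherical-stereo flag on the last line is an assumption about where official builds expose it, since in the branch it is simply on by default:

    import bpy

    scene = bpy.context.scene
    cam = scene.camera.data

    # 1. Enable 'Views' in the Render Layer panel.
    scene.render.use_multiview = True
    scene.render.views_format = 'STEREO_3D'

    # 2. Change the camera to panorama (Cycles).
    cam.type = 'PANO'

    # 3. Set the panorama type to Equirectangular.
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'

    # 'Spherical Stereo' is on by default in the branch; this flag name is
    # an assumption based on where later official builds expose it.
    cam.stereo.use_spherical_stereo = True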

Last and perhaps least is the small demo video above. The experience of seeing a 3D set doesn’t translate well for the video. But the overall impression from the Gooseberry team was super positive.

Also, this particular feature was the exact reason I was moved towards implementing multiview in Blender. All I wanted was to be able to render stereo content for fulldomes with Blender. In order to do that, I had to design a proper 3D stereoscopic pipeline for it.

What started as a personal project in 2013 ended up being embraced by the Blender Foundation in 2014, which supported me for a 2-month work period at the Blender Institute via the Development Fund. And now in 2015, so close to the Multiview completion, we finally get the icing on the cake.

No, wait … the cake is a lie!

Links
  • Multiview Spherical Stereo branch [link] *
  • Gooseberry Production Benchmark File [link]
  • Support the Gooseberry project by signing up in the Blender Cloud [link]
  • Support further Blender Development by joining the Development Fund [link]

* If the branch doesn’t exist anymore, it means that the work was merged into master.

What is next?

Multiview is planned to be merged into master very soon, in time for Blender 2.75. The Spherical Stereo Panorama was not one of the originally planned features, but if we can review it in time it will go in as well.

I would like to investigate whether we need other methods for this technique. For instance, this convergence technique is the equivalent of ‘Toe-In’ for perspective cameras. We could support ‘Parallel’ convergence as well, but ‘Off-Axis’ does not seem to fit here. It would be interesting to test the final image on different devices.

If you manage to test it yourself, do post your impressions in the comment section!


Categories: 3D Design

Sébastien Jodogne, Reglue are Free Software Award winners

FSF - Sun, 03/22/2015 - 00:05

The Award for the Advancement of Free Software is given annually to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software.

This year, it was given to Sébastien Jodogne for his work on free software medical imaging with his project Orthanc.

One of Jodogne's nominators said, "The Orthanc project started in 2011, when Sébastien noticed in his work as a medical imaging engineer that hospitals are very exposed to lock-in problems when dealing with their medical imaging flows....Freely creating electronic gateways between imaging modalities (autorouting), between medical departments, or even between hospitals remains a challenging task. But the amount of medical images that are generated, analyzed, and exchanged by hospitals is dramatically increasing. Medical imaging is indeed the first step to the treatment of more and more illnesses, such as cancers or cardiovascular diseases."

Jodogne said, "Technology and humanism are often opposed. This is especially true in the healthcare sector, where many people fear that technological progress will dehumanize the treatments and will reduce the patients to statistical objects. I am convinced that the continuous rising of free software is a huge opportunity for the patients to regain control of their personal health, as well as for the hospitals to provide more competitive, personalized treatments by improving the interoperability between medical devices. By guaranteeing the freedoms of the users, free software can definitely bring back together computers and human beings."

Jodogne joins a distinguished list of previous winners, including the 2013 winner, Matthew Garrett.

The Award for Projects of Social Benefit is presented to a project or team responsible for applying free software, or the ideas of the free software movement, in a project that intentionally and significantly benefits society in other aspects of life. This award stresses the use of free software in the service of humanity.

This year, the award went to Reglue, which gives GNU/Linux computers to underprivileged children and their families in Austin, TX. According to Reglue, Austin has an estimated 5,000 school-age children who cannot afford a computer or Internet access. Since 2005, Reglue has given over 1,100 computers to these children and their families. Reglue's strategy diverts computers from the waste stream, gives them new life with free software, and puts them in the hands of people who need these machines to advance their education and gain access to the Internet.

One nomination for Reglue read, "Mr. Starks has dedicated his life to distributing free software in many forms, both the digital form...and by building new computers from old parts, giving a new life to old machines by re-purposing them into computers given to extremely needy children and families. They are always loaded with free, GNU/Linux software, from the OS up."

Ken Starks, founder of Reglue, was present at the ceremony to accept the award. While not all free 'as in freedom' software is free of charge, Reglue focuses on finding empowering free software that is also gratis. He said of his work with Reglue, "A child's exposure to technology should never be predicated on the ability to afford it. Few things will eclipse the achievements wrought as a direct result of placing technology into the hands of tomorrow."

Nominations for both awards are submitted by members of the public, then evaluated by an award committee composed of previous winners and FSF founder and president Richard Stallman. This year's award committee was: Hong Feng, Marina Zhurakhinskaya, Yukihiro Matsumoto, Matthew Garrett, Suresh Ramasubramanian, Fernanda Weiden, Jonas Öberg, Wietse Venema, and Vernor Vinge.

More information about both awards, including the full list of previous winners, can be found at https://www.fsf.org/awards.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software—particularly the GNU operating system and its GNU/Linux variants—and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Photos under CC BY-SA 4.0 Attribution

Categories: Free Software

Kat Walsh joins FSF board of directors

FSF - Sat, 03/21/2015 - 17:20

The full list of FSF board members, including biographies, can be found at https://www.fsf.org/about/staff-and-board.

"Seeing how Kat Walsh has championed software freedom in other organizations, she is a natural choice for the FSF board," said FSF president Richard M. Stallman.

A lawyer with extensive background in the free culture movement, Walsh brings a wealth of experience with law and licensing to the FSF board. In particular, her skills will help support and oversee the FSF's licensing work on the GNU General Public License (GPL) as well as the LGPL and GFDL. Kat worked as a staff lawyer at Creative Commons, where she was on the team that drafted the last major revision to the family of Creative Commons licenses, completed in November 2013 with the release of the 4.0 licenses.

Walsh also brings a deep understanding of non-profit management. An active contributor to Wikipedia, Walsh was elected to the board of directors of the Wikimedia Foundation (WMF) for three terms between 2006 and 2013 and served as the organization's chair from 2012 to 2013. During her tenure on the board, she helped oversee the organization's growth from a staff of 3 to over 150. In 2005, the FSF awarded Wikipedia the first ever Free Software Award for Projects of Social Benefit, which is presented annually to the project or team responsible for applying free software, or the ideas of the free software movement, in a project that intentionally and significantly benefits society in other aspects of life.

FSF board member Benjamin Mako Hill said, "As a WMF advisory board member since 2007, I have worked with Kat extensively and have seen her deep commitment to free software firsthand. Kat's consistent and clear advocacy for free software, free documentation, and free media formats in the Wikipedia community and the Wikimedia organization has played an important role in Wikimedia's strong defense of free software and its advocacy of free software principles more broadly. I am thrilled she will bring that commitment and passion to the FSF board."

FSF executive director John Sullivan said, "In addition to her commitment to free software, Kat's deep experience in nonprofit management and her leadership in licensing bring important skills to the FSF board. Kat has been an FSF associate member and supporter for many years and we are excited that she agreed to step into a leadership position within our organization and movement."

Walsh is a member of the Virginia State Bar and the US Patent Bar, and holds a JD from George Mason University. On accepting the invitation to join the board, Walsh said, "I'm honored to join the leadership of this organization—the FSF's work and principles support a free society by enabling individuals to control the software that is an increasing part of everyone's life, particularly as the consequences of losing that control—particularly loss of privacy and freedom of speech—become greater. I look forward to using my skills to help advance its mission."

The announcement was made at LibrePlanet, a conference organized by the FSF and MIT's SIPB that is being held this weekend in Cambridge, Massachusetts. LibrePlanet has been held annually since 2009 and brings together participants from around the world for talks and events related to the broader free software movement. Walsh, who has attended every LibrePlanet meeting, was in attendance for the announcement.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software—particularly the GNU operating system and its GNU/Linux variants—and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Categories: Free Software

LibrePlanet free software conference coming to MIT March 21-22

FSF - Tue, 03/10/2015 - 22:10

Organized around the theme "Free Software Everywhere," the conference's sessions touch on the many places and ways in which free software is used around the world, as well as ways to make free software ubiquitous. Keynote speakers include Free Software Foundation founder Richard Stallman, Software Freedom Conservancy executive director Karen Sandler, and University of Washington professor Benjamin Mako Hill.

This year's LibrePlanet conference will feature over 30 sessions, such as "Attribution revolution -- turning copyright upside-down," "Fighting surveillance with a free, distributed, and federated net," and "Librarians fight back: free software solutions for digital privacy," as well as a hands-on workshop showing participants how to replace even the low-level proprietary software on laptops with something that respects their freedom.

"If you're bothered by the loss of control over your computer and cell phone and all your digital information, and want to know what you can do about it, come to LibrePlanet. The LibrePlanet program is full of presenters who are working from a variety of disciplines to protect our freedom, privacy, and security as computer users," said Libby Reinish, a campaigns manager at the Free Software Foundation.

Online registration for LibrePlanet 2015 is now open; attendees may also register in person at the event.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation, and is co-produced by the Student Information Processing Board. What was once a small gathering of FSF members has grown into a larger event for anyone with an interest in the values of software freedom. LibrePlanet is always gratis for associate members of the FSF. To sign up for announcements about LibrePlanet 2015, visit https://www.libreplanet.org/2015.

LibrePlanet 2014 was held at MIT from March 22-23, 2014. Over 350 attendees from all over the world came together for conversations, demonstrations, and keynotes centered around the theme of "Free Software, Free Society." You can watch videos from past conferences at http://media.libreplanet.org.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contact

Libby Reinish
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Categories: Free Software

More Dependency Graph Tricks

Blender - Wed, 03/04/2015 - 00:12

The new dependency graph enables several corner cases that were not possible in the old system, in part by making evaluation finer-grained and in part by enabling driving from new datablocks. A nice image to illustrate this is the datablock popup in the driver editor:

In the previous image, the highlighted menu item is the only option that is guaranteed to update in current Blender. While testing and development are still very much a work in progress, the goal is that all or most of those menu items will become valid driver targets. I’m in the process of testing and submitting examples to Sergey of what works and what doesn’t – this is going to be a moving target until the refactor is complete.

The two examples in this post are based on some of the new working features:

Driving from (shape) key blocks leads to amazing rigging workflow

That weird little icon in the menu above with a cube and key on it that just says ‘Key’ is the shapekey datablock, which stores all the shapekeys in a mesh. And here’s the insanity: you can now use a shapekey to drive something else. Why the heck is that cool, you ask? Well, for starters, it makes setting up correction shapes really, really easy.

Correction shapes here means those extra shapes one makes to make the combination of two other shapes palatable. For instance, if you combine the ‘smile’ and ‘open’ shapes for Proog’s mouth, you get a weird thing that looks almost like a laugh, but not quite, and distorts some of the vertices in an unphysical way. The typical solution is to create a third shape, ‘smile+open’, that tweaks those errors and perfects the laughing shape. The great thing about the new depsgraph is that you can drive this shape directly from the other two, effectively making a ‘smart’ mesh that behaves well regardless of how it is rigged. If you are curious about this, check out the workflow video below:
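For the curious, here is roughly what such a driver setup looks like in Python. The object and shape names ('Proog_head', 'smile', 'open', 'smile+open') are hypothetical, but the driver calls are the standard bpy API:

    import bpy

    # The shapekey datablock ('Key') that stores all the shapes of the mesh.
    key = bpy.data.objects["Proog_head"].data.shape_keys

    # Add a driver on the corrective shape's value.
    fcu = key.key_blocks["smile+open"].driver_add("value")
    drv = fcu.driver
    drv.type = 'SCRIPTED'

    # One driver variable per source shape, targeting the shapekey
    # datablock itself rather than an object.
    for name in ("smile", "open"):
        var = drv.variables.new()
        var.name = name
        var.type = 'SINGLE_PROP'
        var.targets[0].id_type = 'KEY'
        var.targets[0].id = key
        var.targets[0].data_path = 'key_blocks["%s"].value' % name

    # The correction only fades in when both source shapes are active.
    drv.expression = "smile * open"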

Finer Granularity Dependency Graph Tricks

The finer granularity of the dependency graph lets us work around potential dependency cycles that would trip up the old object-based system, and make usable rig setups. One such setup is at least sometimes called the ‘Dorito Method’, for reasons I have not been able to discern.
The goal of the setup is to deform the mesh using shapekeys, and then further enable small tweaks with deforming controls – an armature. The trick is, to make these controls ‘ride’ with the mesh + shapekeys, effectively a cycle (mesh->bone->mesh) but not really, because the first ‘mesh’ in that sequence is only deformed with shapekeys.
The trick to fix the above cycle is to duplicate the meshes: (mesh1->bone->mesh2) where mesh1 has the shapekeys and mesh2 is deformed by the bone. The sneaky bit is that both mesh objects are linked meshes, so they share the shapekey block.
The problem with Blender before the dependency refactor is that everything works *except* driving the shapes and the deforms from the same armature. This was due to the object-only limitation of the dependency graph. Now that we have finer granularity (at least in the depsgraph_refactor branch), this problem is completely solved!

Since this is a tricky method, I’ve got some more documentation about it after the jump

  1. The above image is an exploded view; in the blend, all three objects (the rig and the two meshes) would be in the same location.
  2. The two meshes are linked-data objects. They share the same shapekeys, hence the same shapekey drivers.
  3. The bone on the right has a custom property that drives the shapekeys, deforming both meshes.
  4. The larger green bone and the square-shaped bone deform the topmost mesh via an armature deform
  5. The lower green bone copies the location of a vertex in the original mesh (child-of would be even more forgiving). This is not a cycle, since the lower mesh is not deformed by the armature.
  6. The visible red control is a child of that bone
  7. The larger green bone (the deformer) has a local copy location to the visible red control

This could be simplified somewhat by adding a Child Of constraint directly to the controller (targeting the original shapekey mesh), but I prefer not to constrain animator controls.
If you were to attempt this in 2.73 or the upcoming 2.74, it would fail to update reliably unless you split out the bone that drives the shapekey into its own armature object. This has to do with the coarse-grained dependency graph in 2.74, which only looks at entire objects. The downside (and the upside of the finer-grained dependency graph) is that you would end up with two actions for animating your character instead of one (bleh), or you might have difficulties with proxies and linked groups.
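As a rough sketch, the linked-duplicate part of the setup looks like this in 2.7x-era bpy (the object and rig names are hypothetical):

    import bpy

    # mesh1 carries the shapekeys; mesh2 shares the same mesh datablock
    # (and therefore the same shapekey block), but only mesh2 is deformed
    # by the armature -- which is what breaks the apparent cycle.
    mesh1 = bpy.data.objects["Face.shapes"]
    mesh2 = bpy.data.objects.new("Face.deform", mesh1.data)  # linked data
    bpy.context.scene.objects.link(mesh2)  # 2.7x API

    mod = mesh2.modifiers.new("Deform", 'ARMATURE')
    mod.object = bpy.data.objects["Rig"]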

Further thoughts

If we had some kind of hypothetical “Everything Nodes”, we could implement this kind of setup without duplicating the mesh – indeed, without having redundant parent and child bones. The 3D setup would be quite simple, and the node setup would be less hackish and clearer about why this is not a dependency. I’ve made a hypothetical ‘everything nodes’ setup below, to illustrate what the dependencies actually are. In a real system, it’s quite likely you’d represent this with two node trees: one for the rig object, and one for the actual mesh deformation.

Categories: 3D Design

Animation System Roadmap – 2015 Edition

Blender - Tue, 03/03/2015 - 12:22

Hi there! It’s probably time to make this somewhat official:

Here is a selection of the most pressing “big ticket” animation-related developments currently on my todo list. Do note that this is not an exhaustive list (there are many other smaller items), but it does contain all the main things that I’m most aware of.

(This is cross-posted from my original post: http://aligorith.blogspot.co.nz/2015/03/animation-system-roadmap.html)

High Priority

NLA

* Local Strip Curves – Keyframing strip properties (e.g. time and influence) currently doesn’t update correctly.     [2.75]

Quite frankly, I’m surprised the current situation seems to work as well as it has, because the original intention here (and only real way to solve it properly) is to have dedicated FCurves which get evaluated before the rest of the animation is handled.

I’ve got a branch with this functionality working already – all that’s missing is code to display those FCurves somewhere so that they can be edited (without them being confused with FCurves in the active actions). That said, the core parts of this functionality are now solid and back under control in the way it was originally intended.

I originally wanted to get this polished and into master for 2.74 – definitely before Gooseberry starts trying to animate, as I know that previous open movie projects ended up using the NLA strip times for stuff (i.e. dragon wings when flying), and the inclusion of this change will be somewhat backwards incompatible (the data structures are all still there – nothing changed on that front – but there were some bugs in the old version, which means that even putting aside the fact you can’t insert keyframes where they’re actually needed, the animations wouldn’t actually get evaluated correctly!).

On a related note – the bug report regarding renaming NLA strips not updating the RNA paths: that is a “won’t fix”, as the way of keyframing these properties that is used in master was never the correct solution. This fix will simply blow it all away, so there is no point piling another hack-fix on top of it all.

* Reference/Rest Track and Animation Layers Support  [2.76]

This one touches on two big issues. Firstly, there’s the bug where, if not all keyframed properties are affected by every strip (or at least set to some sane value by a “reference” strip), you will get incorrect poses when using renderfarms or jumping around the timeline in a non-linear way.

On another front, the keyframing on top of existing layers (i.e. “Animation Layers”) support doesn’t work well yet, because keyframing records the combined value of the stack + the delta-changes applied by the active action that you’re keying into. For this to work correctly, the contributions of the NLA stack must be able to be removed from the result, leaving only the delta changes, thus meaning that the new strip will be accumulated properly.

So, the current plan here is that an explicit “Reference Pose” track will get added to the bottom of NLA stacks. It will always be present, and should include every single property which gets animated in the NLA stack, along with what value(s) those properties should default to in the absence of any contributions from NLA strips.

Alongside this reference track, all the “NlaEvalChannels” will be permanently stored (during runtime only; they won’t get saved to the file) instead of being recreated from scratch each time. They will also get initialised from the Reference Track. Then, this allows the keyframing tools to quickly look up the NLA stack result when doing keyframing, thus avoiding the problems previously faced.

* A better way to retime a large number of strips [2.76/7]

It’s true that the current presentation of strips is not exactly the most compact of representations. To make it easier to retime a large number of strips (i.e. where you might want them to be staggered across a large number of objects), we may need to consider having something like a summary track in the dopesheet. Failing that, we could just have an alternative display mode which compacts these down for this use case.

Action Management [2.74, 2.75]

See the Action Management post. The priority of this ended up being bumped up, displacing the NLA fixes from 2.74 (i.e. Local Strip Keyframes) and 2.75 (i.e. Reference Track Support) back by 1-2 releases.

There are also a few related things which were not mentioned in that post (as they did not fit):

* Have some way of specifying which “level” the “Action Editor” mode works on.

Currently, it is strictly limited to the object-level animation of the active object. Nothing else. This may be a source of some of the confusion and myths out there…  (Surely the fact that the icon for this mode uses the Object “cube” is a bit of a hint that something’s up here!)

* Utilities for switching between Dopesheet and NLA.

As mentioned in the Action Management post, there are some things which can be done to make the relationship between these closer, to make stashing and layering workflows nicer.

Also in question would be how to include the Graph Editor in there somehow too… (well, maybe not between the NLA, but at least with the Dopesheet)

*  “Separate Curves” operator to split off FCurves into another action

The main point of this is to split the unchanging bones out of an action so that it contains only the moving parts. It also paves the way for other stuff, like taking an animation made for grouped objects back to working on individual objects.

Animation Editors

* Right-click menus in the Channels List for useful operations on those channels [2.75]

This should be a relatively simple and easy thing to do (especially if you know what to do). So, it should be easy to slot this in at some point.

* Properties Region for the Action Editor   [2.76]

So, at some point recently, I realised that we probably need to give the Action Editor a dedicated properties region too, to deal with things like groups and also the NLA/AnimData/libraries stuff. Creating the actual region is not really that difficult. Again, it boils down to finding time to slot this in, and then figuring out what to put in there.

* Grease Pencil integration into normal Dopesheet [2.76]

As mentioned in the Grease Pencil roadmap, I’ve got some work in progress to include Grease Pencil sketch-frames in the normal dopesheet mode too. The problem is that this touches almost every action editor operator, which needs to be checked to make sure it doesn’t take the lazy road out by only catering for keyframes in an either/or situation. Scheduling this to minimise conflicts with other changes is the main issue here, as well as the simple fact that again, this is not “simple” work you can do when half-distracted by other stuff.

Bone Naming  [2.77]

The current way that bones get named when they are created (i.e. by appending and incrementing the “.xyz” numbers after their names) is quite crappy, and ends up creating a lot of work when duplicating chains like fingers or limbs. That is because you now have to go through, removing these .xyz suffixes (or changing them back down to the .001 and .002 versions) before changing the actual things which should change (i.e. Finger1.001.L should become Finger2.001.L instead of Finger1.004.L or Finger1.001.L.001).

Since different riggers have different conventions, and this functionality needs to work with the “auto-side” tool as well as just doing the right thing in general, my current idea here is to give each Armature Datablock a “Naming Pattern” settings block. This would allow riggers to specify how the different parts of each name behave.

For example, [Base Name][Chain Number %d][Segment Letter][Separator '.'][Side LetterUpper] would correspond to “Finger2a.L”. With this in place, the “duplicate” tool would know that it should increment the chain number/letter (if just a single chain, while perhaps preparing for flipping the entire side if it’s more of a tree), while leaving the segment alone. Likewise, the “extrude” tool would know to increment the segment number/letter while leaving the chain number alone (and not creating any extra gunk on the end that needs to be cleaned up). The exact specifics though would need to be worked out to make this work well.
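To make the idea concrete, here is a hypothetical sketch of how such a pattern could drive the duplicate and extrude tools. The regex simply encodes the example convention above; none of this is existing Blender code:

    import re

    # [Base Name][Chain Number][Segment Letter][Separator '.'][Side Letter]
    NAME_RE = re.compile(
        r"^(?P<base>[A-Za-z]+?)(?P<chain>\d+)(?P<seg>[a-z])\.(?P<side>[LR])$")

    def duplicate_name(name):
        """Duplicating a chain bumps the chain number, keeps the segment."""
        m = NAME_RE.match(name)
        if not m:
            return name + ".001"  # fall back to the current behaviour
        return "%s%d%s.%s" % (m["base"], int(m["chain"]) + 1,
                              m["seg"], m["side"])

    def extrude_name(name):
        """Extruding bumps the segment letter, keeps the chain number."""
        m = NAME_RE.match(name)
        if not m:
            return name + ".001"
        return "%s%s%s.%s" % (m["base"], m["chain"],
                              chr(ord(m["seg"]) + 1), m["side"])

    # duplicate_name("Finger2a.L") -> "Finger3a.L"
    # extrude_name("Finger2a.L")   -> "Finger2b.L"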

Drivers

* Build a dedicated “Safe Python Subset” expression engine for running standard driver expressions to avoid the AutoRun issues

I believe that the majority of driver expressions can be run without full Python interpreter support, and that the subset of Python needed to support the kinds of basic math equations that the majority of such driver expressions use is a very well defined/small set of things.

This set is small enough that we can in fact implement our own little engine for it, with the benefit that it could probably avoid most of the Python overheads as a result, while also being safe from the security risks of having a high-powered Turing-complete interpreter powering it. Another benefit is that this technique would not suffer from GIL issues (which will help in the new depsgraph; oddly, this hasn’t been a problem so far, but I’d be surprised if it doesn’t rear its ugly head at the worst possible moment of production at some point).

In the case where it cannot in fact handle the expression, it can then just turf it over to the full Python interpreter instead. In such cases, the security limiting would still apply, as “there be dragons”. But, for the kinds of nice + simple driver expressions we expect/want people to use, this engine should be more than ample to cope.

So, what defines a “nice and simple” driver expression?

- The only functions which can be used are builtin math functions (and not any arbitrary user-defined ones in a script in the file; i.e. only things like sin, cos, abs, … would be allowed)

- The only variables/identifiers/input data it can use are the Driver Variables that are defined for that driver. Basically, what I’ve been insisting that people use when using drivers.

- The only “operators” allowed are the usual arithmetic operations: +, -, *, /, **, %

What makes a “bad” (or unsafe) driver expression?

- Anything that tries to access anything using any level of indirection. So, this rules out all the naughty “bpy.data[...]” accesses and “bpy.context.blah” that people still try to use, despite now being blasted with warnings about it. This limitation is also in place for a good reason – these sorts of things are behind almost all the Python exploits I’ve seen discussed, and implementing such support would just complicate and bloat our little engine.

- Anything that tries to do list/dictionary indexing, or uses lists/dictionaries. There aren’t many good reasons to be doing this (EDIT: perhaps randomly choosing an item from a set might count. In that case, maybe we should restrict these to “single-level” indexing instead?).

- Anything that calls out to a user-defined function elsewhere. There is an inherent risk here, in that that code could do literally anything.

- Expressions which try to import any other modules, or load files, or crazy stuff like that. There is no excuse… Those should just be red-flagged whatever the backend involved, and/or nuked on the spot when we detect this.
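In Python terms, the whitelist approach might look like the sketch below: parse the expression with the ast module and reject anything outside the allowed subset, turfing everything else over to the full interpreter. This is only a prototype of the idea (the real engine would presumably be implemented in C, without the Python interpreter at all):

    import ast
    import math

    SAFE_FUNCS = {n: getattr(math, n)
                  for n in ("sin", "cos", "tan", "asin", "acos", "atan",
                            "sqrt", "floor", "ceil", "pow")}
    SAFE_FUNCS["abs"] = abs

    SAFE_NODES = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Call,
                  ast.Name, ast.Load, ast.Constant,
                  ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
                  ast.USub, ast.UAdd)

    def eval_simple_driver(expression, driver_vars):
        """Evaluate a 'nice and simple' driver expression, or raise
        ValueError so the caller can fall back to full Python."""
        tree = ast.parse(expression, mode="eval")
        for node in ast.walk(tree):
            if not isinstance(node, SAFE_NODES):
                # Attribute access, subscripts, imports, lambdas, etc.
                raise ValueError("unsafe node: %s" % type(node).__name__)
            if isinstance(node, ast.Call):
                # Only direct calls to whitelisted math builtins.
                if not (isinstance(node.func, ast.Name)
                        and node.func.id in SAFE_FUNCS):
                    raise ValueError("unsafe function call")
            elif isinstance(node, ast.Name):
                # Only the driver variables defined for this driver.
                if node.id not in driver_vars and node.id not in SAFE_FUNCS:
                    raise ValueError("unknown name: %s" % node.id)
        env = dict(SAFE_FUNCS)
        env.update(driver_vars)
        return eval(compile(tree, "<driver>", "eval"),
                    {"__builtins__": {}}, env)

    # eval_simple_driver("sin(rot) * 0.5 + var", {"rot": 1.0, "var": 2.0})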

* A modal “eyedropper” tool to set up common “garden variety” 1-1 drivers

With the introduction of the eyedropper tools to find datablocks and other stuff, a precedent has been set in our UI, and it should now be safe to include similar things for adding a driver between two properties. There are of course some complications which arose from the operator/UI code mechanics last time I tried this, but putting this in place should make it easier for most cases to be done.

* Support for non-numeric properties

Back when I initially set up the animation system, I couldn’t figure out what to do with things like strings and pointers to coerce them into a form that could work with animation curves. Even now, I’m not sure how this could be done. That said, while writing this, I had the thought that perhaps we could just use the same technique used for Grease Pencil frames.

Constraints

* Rotation and Scale Handling

Instead of trying to infer the rotation and scale from the 4×4 matrices (and failing), we would pass down “reference rotation” and “reference scale” values alongside the 4×4 matrix during the evaluation process. Anytime anything needs to extract a rotation or scale from the matrix, it has to adjust that to match the reference transforms (i.e. for rotations, this does the whole “make compatible euler” stuff to get them up to the right cycle, while for scale, this just means setting the signs of the scale factors). If, however, the rotation/scale gets changed by the constraint, it must also update those references to whatever it is now basing its values on.
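For the rotation half, the “make compatible euler” adjustment referred to above is, per axis, essentially the following (a simplified sketch of the idea, not Blender’s exact C code):

    import math

    def make_compatible(angle, reference):
        """Shift 'angle' by whole turns so it sits on the same cycle as
        'reference', avoiding sudden 360-degree flips between frames."""
        while angle - reference > math.pi:
            angle -= 2.0 * math.pi
        while reference - angle > math.pi:
            angle += 2.0 * math.pi
        return angle

    # make_compatible(math.radians(-170), math.radians(175))
    # -> math.radians(190), i.e. the same cycle as the reference angle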

These measures should be enough to combat the limitations currently faced with constraints. Will it result in really ugly code? Hell yeah! Will it break stuff? Quite possibly. Will it make it harder to implement any constraints going forth? Absolutely. But will it work for users? I hope so!

Rigging

It’s probably time that we got a “Rigging Dashboard” or similar…

Perhaps the hardest thing in trying to track down issues in the rigs being put out by guys like JP and cessen these days is that they are so complex (with multiple layers of helper bones + constraints + parenting + drivers scattered all over) that it is hard to figure out where exactly to start, or which set of rigging components interact to create a particular result.

Simply saying “nodify everything” doesn’t work either. Yes, it’s all in one place now, but then you’ve got the problem of a giant honking graph that isn’t particularly nice to navigate (large graph navigation in and of itself is another interesting topic for another time and date).

Key things that we can get from having such a dashboard are:

1) Identifying cycles more easily, and being able to fix them

2) Identifying dead/broken drivers/constraints

3) Isolating particular control chains to inspect them, with everything needed presented in one place (i.e. on a well designed “workbench” for this stuff)

4) Performance analysis tools to figure out which parts of your rig are slow, so that you can look into fixing that.

Medium Priority NLA

* A better way of flattening the stack, with fewer keyframes created

In many cases, it is possible to flatten the NLA stack without baking out each frame. This really only applies when strips don’t overlap, in which case the keyframes can simply be transposed “as is”. When strips do interact, there may be possibilities to combine them in a smarter way; in the worst case, we can just combine by baking.
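A sketch of that dispatch logic, using a simplified stand-in for NLA strips (the strip fields and the bake callback are hypothetical, not Blender’s actual NLA API):

```python
def flatten_nla_track(strips, bake):
    """Flatten strips into one keyframe list: transpose when possible.

    Each strip is assumed to have .frame_start, .frame_end, .action_start
    and .keyframes (a list of (frame, value) pairs in action time).
    bake(start, end) is a callback producing per-frame baked keys.
    """
    strips = sorted(strips, key=lambda s: s.frame_start)
    if any(a.frame_end > b.frame_start for a, b in zip(strips, strips[1:])):
        # Strips interact: fall back to baking the whole range (worst case).
        return bake(strips[0].frame_start, strips[-1].frame_end)
    # No overlaps: keyframes can simply be transposed "as is",
    # shifted by each strip's time offset.
    result = []
    for s in strips:
        offset = s.frame_start - s.action_start
        result.extend((frame + offset, value) for frame, value in s.keyframes)
    return result
```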

* Return of special handling for Quaternions?

I’m currently pondering whether we’ll need to reinstate special handling for quaternion properties, to keep things sane when blending.

* Unit tests for the whole time-mapping math

I’ve been meaning to do this, but I haven’t been able to get the gtests framework to work with my build system yet… If there ever were a model example of where these things come in handy, it is this!

Animation Editors

* Expose the Animation Channel Filtering API to Python

Every time I see the addons that someone has written for dealing with animation data, I’m admittedly a bit saddened that they do things like explicitly digging into the active object only, and probably only caring about certain properties in there. Let’s just say, “been there, done that”… that was what was done in the old 2.42/3 code, before I cleaned it up around 2.43/2.44, as it was becoming such a pain to maintain (i.e. each time a new toggle or datatype was added, ALL the tools needed to be recoded).

These days, all the animation editors do in fact use a nice C API for all things channels-related. Some of it pre-dates the RNA system, so it could be said that there are some overlaps. Then again, this one is specialised for writing animation tools and drawing animation editors, while RNA is generic data access – no comparison basically.

So, this will happen at some point, but it’s not really an urgent/blocking issue for anything AFAIK.

* To support the filtering API, we need a way of setting up or supplying more general filtering settings that can be used everywhere the dopesheet filtering options aren’t already available

The main reason why all the animation editor operators refuse to work outside of those editors is that they require the dopesheet filtering options (i.e. those toggles on the header for each datablock, and other things) to control what they are able to see and affect. If we have some way of passing such data to operators which need it in other contexts (as a fallback), this opens the way for things like being able to edit keyframes in the timeline.

As you’ll hopefully be well aware, I’m extremely wary of any requests to add editing functionality to the timeline. On day one, it’ll just be “can we click to select keyframes, and then move them around”, and then before long, it’s “can we apply interpolation/extrapolation/handle types/etc. etc.” As a result, I do not consider it viable to specifically add any editing functionality there. If there is editing functionality for the timeline, it’ll have to be borrowed from elsewhere!

Action Editor/Graph Editor

* Add/Remove Time

Personally I don’t understand the appeal of this request (maybe it’s a Maya thing), but nonetheless, it’s been on my radar/list as something that can be done. The only question is this: is it expected that keyframes should be added to enact a hold when this happens, or is it simply a matter of expanding and contracting the space between keyframes?
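For the simple “expand/contract the space” interpretation, the core operation is just an offset applied to all keys at or after a given frame. A minimal sketch on a bpy Action (no hold keys are inserted; the function name is made up):

```python
import bpy

def insert_time(action, frame, delta):
    """Shift every keyframe at/after `frame` by `delta` frames.

    Removing time is just a negative delta; note that a large negative
    delta can make keys cross over earlier ones, which a real tool
    would need to guard against.
    """
    for fcu in action.fcurves:
        for kp in fcu.keyframe_points:
            if kp.co.x >= frame:
                kp.co.x += delta
                kp.handle_left.x += delta
                kp.handle_right.x += delta
        fcu.update()  # re-sort and recalculate after editing

# e.g. push everything from frame 50 onwards back by 10 frames:
# insert_time(bpy.data.actions["CubeAction"], 50, 10)
```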

* Make breakdown keyframes move relative to the main keyframes

In general, this is simple, up until the keyframes start moving over each other. At that point, it’s not clear how to get ourselves out of that pickle…

Small FCurve/Driver/etc. Tweaks

* Copy Driver Variables

* Operators to remove all FModifiers

Motion Capture Data

* A better tool for simplifying dense motion curves

I’ve been helping a fellow kiwi work on getting his curve simplification algorithm into Blender. So far, its main weakness is that it is quite slow (it runs in exponential time, which sucks on longer timelines), but it comes with guarantees of “optimal” behaviour. We also need to find some way to estimate the optimal parameters, so that users don’t have to spend a lot of time testing different combinations (which is not going to be very nice, given the non-interactive nature of this).

Feel free to try compiling this, give it a good test on a larger number of files, and let us know how you go!
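For readers unfamiliar with curve simplification, the classic Ramer-Douglas-Peucker algorithm below is a useful point of comparison: it is fast but greedy, with no optimality guarantee, which is exactly the trade-off the exponential-time algorithm above is trying to beat. (This is an illustration, not the code in the branch.)

```python
import math

def rdp_simplify(points, epsilon):
    """Ramer-Douglas-Peucker simplification of (frame, value) pairs.

    Keeps the endpoints, recursively keeping the interior point that
    deviates most from the chord whenever that deviation exceeds epsilon.
    """
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    norm = math.hypot(x1 - x0, y1 - y0) or 1.0
    dmax, imax = 0.0, 0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        # perpendicular distance from (x, y) to the endpoint chord
        d = abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = rdp_simplify(points[:imax + 1], epsilon)
    right = rdp_simplify(points[imax:], epsilon)
    return left[:-1] + right  # drop the duplicated split point
```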

* Editing tools for FSamples

FSamples were designed explicitly for the problem of tackling motion capture data, and should be more suited to this than the heavier keyframes.

Keying Sets

* Better reporting of errors

The somewhat vague “Invalid context” error for Keying Sets comes about because there isn’t a nice way to pipe more diagnostic information in and out of the Keying Set callbacks which could provide us with that information. It’s a relatively small change, but may be better with…

Pose Libraries

* Internal code cleanups to split out the Pose Library API from the Pose Library operators

These used to serve both purposes, but the 2.5 conversion meant that they were quickly converted over to operator-only code to save time. This is now becoming a bottleneck for other work.

* Provide Outliner support for Pose Library ops

There’s a patch in the tracker, but it went about this in the wrong way (i.e. by duplicating the code into the outliner). If we get that issue out of the way, this is relatively trivial.

* Pose Blending

Perhaps the biggest upgrade that can be made is to retrofit a different way of applying the poses: one which can blend between the values in the action and the current values on the rig. Such functionality does somewhat exist already (for the Pose Sliding tools), but we would need to adapt/duplicate it to get the desired behaviour. More investigation is needed, but it will happen eventually.

* Store thumbnails for Poses + use the popup gallery (i.e. the one used for brushes) for selecting poses

I didn’t originally do this, as at the time I thought that these sorts of grids weren’t terribly effective (I’ve since come around on this, after reading more about this stuff) and that it would be much nicer if we could actually preview how the pose would apply in 3D, to better evaluate how well it fits the current pose (versus only having a 2D image to work off). The original intent was also to have a fancy 3D gallery, where scrolling through the gallery would swing/slide the alternatively posed meshes in from the sides.

Knowing what I know now, I think it’s time we used such a grid as one of the ways to interact with this tool. Probably the best way would be to make it possible to attach arbitrary image datablocks to Pose Markers (allowing, for example, the ability to write custom annotations – i.e. which phonemes a mouth shape refers to), and to provide some operators for creating these thumbnails from the viewport (i.e. by drawing a region to use).

Fun/Useful but Technically Difficult

There are also a bunch of requests I’d like to indulge, and indeed I’ve wanted to work on them for years. However, these also come with a not-insignificant amount of baggage, which means that they’re unlikely to show up soon.

Onionskinning of Meshes

Truth be told, I wanted to do this back in 2010, around the time I first got my hands on a copy of Richard Williams’ book. The problem, though, was and remains that of maintaining adequate viewport/update performance.

The most expensive part of the problem is that we need to have the depsgraph work (operating on local copies of data, and in a separate thread) in place before we can consider implementing this. Even then, we’ll also need to include some point caching stuff (e.g. Alembic) to get sufficient performance to consider this seriously.

Editable Motion Paths

This one actually falls into the “even harder” basket, as it involves three different “hard” problems:

1) Improved depsgraph so that we can have selective updates of only the stuff that changes, and also notify all the relationships appropriately

2) Solving the IK problem (i.e. changed spline points -> changed joint positions -> local-space transform properties with everything applied, so that it works when propagated through the constraints). I tried solving this particular problem 3 years ago, and ran into many different little quirky corner cases where it would randomly bug/spazz out, flipping and popping, or simply not going where it needed to go, because the constraints exhibit non-linear behaviour and interpret the results differently. This particular problem is one which affects all the other fun techniques I’d like to use for posing stuff, so we may have to solve this once and for all with an official API for doing it. (And judging from the problems faced by the authors of various addons – including the current editable motion paths addon, and also the even greater difficulties faced by the author of the Animat on-mesh tools – it is very much a tricky beast to tame.)

3) Solving the UI issues with providing widgets for doing this.

Next-Generation Posing Tools

Finally we get to this one. Truth be told, this is the project I’ve actually been itching to work on for the past 3 years, but have had to put off for various reasons (i.e. to work on critical infrastructure fixes and also for uni work). It is also somewhat dependent on being able to solve the IK problem here (which is a recurring source of grief if we don’t do it right).

If you dig around hard enough, you can probably guess what some of these are (from demos I’ve posted and also things I’ve written in various places). The short description, though, is that if this finally works in the way I intend, we’ll finally have an interface that lets us capture the effortless flow, elegance, and power of traditional animating greats like Glen Keane or Eric Goldberg – for having a computer interface that allows that kind of fluid interaction is one of my greatest research interests.

Closing Words

Looking through this list, it looks like we’ve got enough here for at least another 2-3 years of fun times!

Categories: 3D Design

Future viewport, the design

Blender - Tue, 12/02/2014 - 17:34

As outlined in the previous post there are some technical and feature targets we want to achieve. Recapping here:

1) Performance boost for drawing code. Make sure we always use the best drawing method to pass data to the GPU, and support features that are only available in newer OpenGL versions, enabling better performance and code.

2) Node-based material definition for the viewport – and definition of a new real-time material system used for rendering (GLSL renderer).

3) Compositing. Includes things such as outlines, depth of field, ambient occlusion, HDR, bloom, flares.

4) Support mobile devices (OpenGL ES).

What is the state so far:

* Limited compositing (in the viewport_experiments branch). When we say limited, we mean that the compositor is not tied into the interface properly; rather, it just applies effects to the whole contents of the framebuffer. What we would ideally want is to prevent UI indicators, such as wires or bones, from affecting compositing. This is not too hard to enforce though, and can be done similarly to how the current transparency/X-ray system works, by tagging wire objects and adding them to be rendered on top of compositing.

* Some parts of our mesh drawing code use Vertex Buffer Objects in an optimal way; others use them but still suffer from performance issues by not doing it right; and others do not use them at all.

How will the soc_2014_viewport_fx branch help achieve the targets?

Soc-2014_viewport_fx provides a layer that can be used to migrate to newer or mobile versions of OpenGL with less hassle, but it also tries to enforce some good rendering practices along the way, such as the requirement in modern versions of OpenGL that everything is rendered through Vertex Buffer Objects. It also removes GLU from the dependencies (since GLU uses deprecated OpenGL functionality).

It also puts in place some initial functionality so that things can be drawn using shaders exclusively. This is essential if we move to modern or mobile OpenGL versions at some point.

So it mostly helps with targets 1 and 4, but more work will need to be done after merging to realize those targets fully.

At some point, if we want to support modern or mobile OpenGL, we can’t avoid rewriting a big part of our realtime rendering code. The branch already takes care of some of that, so it should be merged and worked on (merging is the first step really), unless we do not really care about supporting those platforms and features.

My estimation, from personal experiments with manual merging, is that it would take about 2-3 weeks of full-time work to bring the branch to master-readiness.

Can we focus on some targets immediately?

Yes we can. Some targets, such as node materials or compositing, just assume GLSL support in mesh drawing, which is yet to be fully realized in the branch anyway, so the branch is not really blocking their progress. However, getting the branch in as soon as possible will mean fewer headaches during the merge.

Viewport usability design

Draw modes

Draw modes are getting a little bit unpredictable as to what they enable, and are tied to a real-time material definition limited to specular/diffuse/textured. They are also bound to the texture-face data structure, which is becoming less relevant since we are slowly moving to a material-based approach. Often artists have to tweak a number of material and object options to get the visual feedback they need, which can be frustrating and is not apparent to new users either. We need a design which allows artists to easily work on a particular workflow, able to visualize what they want without extensive guesswork about how best to visualize it. Ideally we want to drop draw modes in favour of…

Workflow modes (model, sculpt, paint, animation, game shader design)

Different workflows require different data and different visualizations. So we can define ‘workflow modes’, each of which includes a set of shaders and visualization options authored specifically for the current workflow. For instance, a ‘workbench’ mode in edit mode would have a basic diffuse and specular shader with wireframe display options. For retopology, it would make sense to use a more minimal, transparent mesh display, like hidden wire, with depth offsetting to avoid intersection artifacts.

Example image of edit mode display options. Some options exist to aid in specific workflows, but this is not so readily apparent

For material definition or texture painting, users might want the full final result or an unshaded version of it for detail tweaking.

Debugging (logic, rigging, etc)

Drawing can offer visual feedback to make it easier for users to examine problematic areas in their scenes. Examples include order of dependency calculation or color-encoded vertex and face counts, or even debug options available to developers.


Easy to switch from one to another, easy to config or script

Using the workflow system, users should be able to get their display to be more predictable. Each workflow mode can expose settings for the shaders or passes used, but we can allow more customization than this. A node interface will allow users to request data from blender and write their own shaders to process and visualize these data in their own way. We will follow the OSL paradigm, with a dedicated node that requests data from blender in the form of data attribute inputs connected to the node. The data request system is at the heart of the new data streaming design, and this means that materials and custom shaders should be able to request such data. Probably access to real-time compositing will be included too, though memory consumption is a concern here, and we need to better define how data will be requested in that case.


Modernize! Assume that users will always want the best, most realistic, etc.

With the capabilities modern real-time shading offers, we aim to add a third render engine using OpenGL (next to internal and cycles), which can leverage the capabilities of modern GPUs and is tailored to make real-time rendering a real alternative for final rendering in blender. A lot of the components are already there, but we can push it further, with shader implementations optimized especially for real-time rendering instead of trying to mimic an off-line renderer.

We want to make sure that our material display is pleasing, so we are exploring more modern rendering methods such as physically based shading (a patch by Clement Foucault using notes from Unreal Engine 4 is already considered for inclusion) and deferred rendering.

Needless to say this will also mean improved preview of materials for blender internal and cycles.

Categories: 3D Design

Viewport project – targets, current state of the code

Blender - Mon, 09/22/2014 - 13:52

Depth of field in progress

Encompassing a broad issue with decentralized code, such as real-time drawing, under the umbrella of the “Viewport” project might be slightly misleading. The viewport project essentially encapsulates a few technical and artistic targets, such as:

  • Performance improvement in viewport drawing, allowing greater vertex counts
  • Shader driven drawing – custom/user driven or automatic for both internal materials and postprocessing in viewport (includes eye candy targets such as HDR viewport, lens flares, PBR shaders, depth of field)
  • Portability of drawing code – this should allow us to switch with as little pain as possible to future APIs and devices, such as OpenGL ES compatible devices

These targets include code that has already been written as part of blender and as part of the viewport GSOC projects by Jason Wilkins, and they will also require more code and a few decisions on our part to make them work. One of those decisions is about the version of OpenGL that will be required for blender from now on.

First, we should note that OpenGL ES 2.0 for mobile devices is a good target to develop for when we support mobile devices in the future, given those stats. OpenGL ES 2.0 means, roughly, that we need programmable shading everywhere – the fixed-function pipeline does not exist in that API. Also, using programmable shading only will allow us to easily upgrade to a pure OpenGL 3.0+ core profile if/when we need to, since modern OpenGL also has no fixed pipeline anymore.

For non-technical readers: OpenGL 3.0+ has two profiles, “compatibility” and “core”. While compatibility is backwards compatible with previous versions of OpenGL, the core profile throws out a lot of deprecated API functionality, and vendors can enable more optimizations in core profiles, since they do not need to take care to avoid breaking compatibility with older features. Upgrading is not really required, since we can already use an OpenGL 3.0+ compatibility profile in most OS’s (with the exception of OSX), and OpenGL extensions allow us to use most features of modern OpenGL. Upgrading to core 3.0 would only force us to use certain coding paradigms in OpenGL that are guaranteed to be “good practice”, since deprecated functionality does not exist there. Note, though, that those paradigms can be enforced now (for instance, by using preprocessor directives to prohibit use of the deprecated functions, as done in the viewport GSOC branch), using OpenGL 2.1.

So let’s explore a few of those targets, explaining ways to achieve them:

  • Performance:

This is the most deceptive target. Performance is not just a matter of upgrading to a better version of OpenGL (or to another API such as DirectX, as has been suggested in the past). Rather, it is a combination of using best practices when drawing, which are not currently being followed everywhere, and using the right API functions. In blender code we can benefit from:

  1. Avoid CPU overhead. This is the most important issue in blender. Various drawing paths check every face/edge state that is sent to the GPU before sending them. Such checks should be cached and invalidated properly. This alone should make drawing of GLSL and textured meshes much faster. This requires rethinking our model of derivedmesh drawing. The current model uses polymorphic functions in our derived meshes to control drawing. Instead, drawing functions should be attached to the material types available for drawing, and derived meshes should have a way to provide materials with the requested data buffers for drawing. A change that will drastically improve the situation for textured drawing is redesigning the way we handle texture images per face. The difficulty here is that every face can potentially have a different image assigned, so we cannot make optimizing assumptions easily. To support this, our current code loops over all mesh faces every frame (regardless of whether the display data have changed or not) and checks every face for images. This is also relevant to minimizing state changes – see below.
  2. Minimize state changes between materials and images. If we move to a shader-driven pipeline this will be important, since changing between shaders incurs more overhead than simply changing the numerical values of default phong materials.
  3. Only re-upload data that need re-uploading. Currently, blender uploads all vertex data to the GPU when a change occurs. It should be possible to update only a portion of that data. E.g., editing UVs should only update UV data; if the modifiers on a mesh are deform-type only, update only vertex positions; etc. This is hard to do currently because derivedmeshes are completely freed on mesh update, and the GPU data reside on the derivedmesh.
  4. Use modern features to accelerate drawing. This surely includes instancing APIs in OpenGL (attribute- or uniform-based), which can only be used if we use shaders. Direct state access APIs and memory mapping can help eliminate driver overhead. Uniform buffer objects are a great way to pass data across shaders without rebinding uniforms and attributes per shader; however, they require shading language written explicitly for OpenGL 3.0+. Transform feedback can help avoid vertex streaming overhead in edit mode drawing, where we redraw the same mesh multiple times. Note that most of those are pretty straightforward and trivial to plug in, once the core that handles shader-based, batch-driven drawing has been implemented.
  • Shader Driven Drawing

The main challenge here is the combinatorial explosion of shaders (i.e. a shader uses lighting or not, uses texturing or not, is dynamically generated from nodes, etc.). Ideally we want to avoid switching shaders as much as possible. This can be trivially accomplished by drawing per material, as explained above. We could probably implement a hashing scheme where materials that share the same hash also share the same shader, though this would incur its own overhead. Combinations are not only generated by different material options, but also by various options used in painting, editors, objects, even user preferences. The aspect system in the works in the GSOC viewport branch attempts to tackle the issue by using predefined materials for most of blender’s drawing, where of course we use parameters to tweak the shaders.

Shader-driven materials open the door to other interesting things, such as GPU instancing and even deferred rendering. For the latter we are already doing some experiments in the viewport_experiments branch. For some compositing effects, we can reconstruct the world-space positions and normals even now using a depth buffer, but this is expensive. Using a multi-render-target approach here will help with performance, but again, this needs shader support. For starters, though, we can support a minimum set of ready-made effects for viewport compositing. Allowing full-blown user compositing or shading requires having the aforementioned material system, where materials or effects can request mesh data appropriately.

Shader-driven drawing is of course important for real-time node-driven GLSL materials and PBR shaders too. These systems still need a good tool design, maybe even a blender internal material system redesign, which would be much more long-term if we do it. Some users have proposed a visualization system separate from the renderers themselves. How it all fits together and what expectations it creates is still an open issue – will users expect to get the viewport result during rendering, or do we allow certain shader-only real-time eye candy, with a separate real-time workflow?
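The hashing scheme mentioned above could be as simple as a dictionary keyed by the feature combination. A toy sketch, where the key fields and compile_shader_for are hypothetical placeholders:

```python
# Materials that reduce to the same feature key share one compiled shader.
# The fields and compile_shader_for() are illustrative names only.
_shader_cache = {}

def get_shader(material):
    key = (material.use_lighting,
           material.use_texturing,
           material.nodes_hash)   # hash of the generated node shader code
    shader = _shader_cache.get(key)
    if shader is None:
        shader = _shader_cache[key] = compile_shader_for(key)
    return shader
```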

Screen Space Ambient Occlusion shader on a sculpted mesh

  • Portability

Being able to support multiple platforms – in other words, multiple OpenGL versions or even graphics APIs – means that we need a layer that handles all GPU operations and allows no explicit OpenGL in the rest of the code, allowing us to basically replace the GPU implementation under blender transparently. This has already been handled in the GSOC viewport 2013 branch (the 2014 branch is just the bare API at the moment, not hooked into the rest of blender), with code that takes care of disallowing OpenGL functions outside the gpu module. That will mean GLES and mobile device support, which is something Alexandr Kuznetsov has worked on and demonstrated a few years back.

  • Conclusion

As can be seen, some of those targets can be accomplished by adjusting the current system, while other targets are more ambitious and long-term. For Gooseberry, our needs are more urgent than the long-term deliverables of the viewport project, so we will probably focus on a few pathological cases of drawing and a basic framework for compositing (which cannot really be complete until we have a full shader-driven pipeline). However, in collaboration with Jason and Alexandr, we hope to finish and merge the code that will make those improvements possible on a bigger scale.

Categories: 3D Design

Hair System Roadmap

Blender - Mon, 09/08/2014 - 18:06

The Blender hair system will get a number of improvements for the Gooseberry project. Especially the hair dynamics have to be improved and integrated better into the set of artistic tools to allow animators to control and tweak the hair system efficiently. We have a number of goals that should make hair modelling and simulation into a more flexible and helpful tool.

Solver Stability

Animation tools for hair are quite useless without a stable physical solver. Especially for long hairs a physical solver is a valuable tool for generating believable motion. The solver for the simulation has to be very stable, meaning that it produces correct values (no “explosions”) and does not introduce additional motion due to numerical errors (jiggling).

The current solver for the hair dynamics has a number of issues, resulting from conflicts in the mixed cloth/hair model, questionable assumptions in the force model and plain bugs. To avoid these issues the numerical solver implementation will be replaced by a modified Eigen-based solver. Eigen is a library for linear algebra that is already used in Blender and provides a lot of optimizations that would be hard to introduce otherwise.

Numerical Solver Overview (since this is a code blog)

The physical model for hair systems defines each hair as a series of points, connected by “springs”. In addition there are a couple of external influences that have to be accounted for. The physical equations boil down to calculating changes in positions and velocities of these points.

Our solver then has the task of calculating these Δx and Δv so that the result is as close as possible to the actual solution. As a first-order approximation, and using sensible force models, the differential equations can be expressed as a linear system A·Δv = b (see the Further Reading section for in-depth information). The algorithm of choice for solving this system is the Conjugate Gradient method. The Eigen library already provides a nice set of CG algorithms.
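For the curious, the linearization follows Baraff/Witkin: one backward-Euler step with mass matrix M, time step h and forces f gives

```latex
\Delta v = h\,M^{-1} f\!\left(x_0 + \Delta x,\; v_0 + \Delta v\right),
\qquad \Delta x = h\,(v_0 + \Delta v)
```

and a first-order Taylor expansion of f turns this into the linear system A·Δv = b:

```latex
\underbrace{\left(M - h\,\frac{\partial f}{\partial v}
                    - h^{2}\,\frac{\partial f}{\partial x}\right)}_{A} \Delta v
= \underbrace{h\left(f_0 + h\,\frac{\partial f}{\partial x}\, v_0\right)}_{b}
```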

Unfortunately, for a constrained system such as a hair structure with “pinned” hair root points, as well as collision contacts (see below), the basic CG solver is not enough. We need to extend the method somewhat to take constraints into account and selectively limit the degrees of freedom in the solution. The paper by Baraff/Witkin describes this modification in detail.
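The modification amounts to “filtering” the CG vectors so that constrained degrees of freedom never pick up a component. A compact numpy sketch of the idea (dense matrices for clarity; the real solver is sparse, preconditioned, and written with Eigen):

```python
import numpy as np

def filtered_cg(A, b, S, tol=1e-10, max_iter=200):
    """Conjugate Gradient with constraint filtering, after Baraff/Witkin.

    A: (n, n) system matrix, b: right-hand side.
    S: (n, n) filter that zeroes forbidden directions; identity blocks
       for free points, zero blocks for pinned hair roots, and
       (I - n n^T) blocks for points in sliding contact with a surface.
    """
    x = np.zeros_like(b)
    r = S @ (b - A @ x)          # residual, with constrained DOFs removed
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = S @ (A @ p)         # keep the search direction filtered too
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```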

Hair Volume and Friction

Hair and fur coats need a number of features that are notoriously difficult to model in a hair simulation: volume and friction. “Volume” is the phenomenon where a lot of hairs close together will push each other away and leave empty space between them (especially curly hair). “Friction” is what makes entangled hair so difficult to comb, because hairs stick together and have lots of surface area.

Both these effects could be naively modeled by hair-hair collisions, but this is prohibitively expensive due to the potential number of collision pairs. A more economical approach is to model the cumulative effect of hairs using a voxel grid. This feature has already been implemented.
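The gist of the voxel approach, as a small numpy sketch (the grid parameters and trilinear splat are generic, not Blender’s exact implementation): accumulate a density field from the hair points, then derive pressure and friction forces from its gradient and from an analogous average-velocity field.

```python
import numpy as np

def hair_density_grid(points, origin, cell_size, dims):
    """Splat hair points into a voxel grid with trilinear weights."""
    grid = np.zeros(dims)
    for p in points:
        u = (np.asarray(p, dtype=float) - origin) / cell_size
        base = np.floor(u).astype(int)
        f = u - base                       # fractional position in the cell
        for corner in np.ndindex(2, 2, 2):
            idx = base + corner
            if np.all(idx >= 0) and np.all(idx < dims):
                w = np.prod(np.where(np.array(corner) == 1, f, 1.0 - f))
                grid[tuple(idx)] += w      # each point contributes weight 1 total
    return grid

# "Volume" pressure pushes points from dense cells toward empty ones,
# i.e. along the negative density gradient; np.gradient(grid) gives
# that per axis.
```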

Collisions

Collisions are essential for believable simulation results, but so far they don’t exist for hair simulation in Blender (only a volume-based friction model, which is a poor replacement).

The first stage in collision handling is to actually detect intersections of hair segments with meshes. This is done in two distinct phases to speed up the process (sketched in code after the list):

  • Broadphase: The hair segment is tested against the bounding boxes of eligible colliders to narrow down the number of pairs. Acceleration structures can speed up the process of finding overlapping pairs.
  • Nearphase: The potential intersection pairs are tested for actual intersection of the detailed geometry.
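In code, the two phases are just a cheap test guarding an expensive one. A schematic sketch (brute-force pairing for brevity; `nearphase` stands in for the exact segment/triangle intersection, and a real implementation would cull with a BVH or Bullet’s broadphase instead of the O(n·m) loop):

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Broadphase: do two axis-aligned bounding boxes overlap?"""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

def collect_collision_pairs(segments, colliders, nearphase):
    """Gather (segment, triangle) pairs that actually intersect.

    `segments` and `colliders` are lists of (bbox_min, bbox_max, geom)
    tuples; only pairs surviving the cheap bbox test are handed to the
    expensive exact test.
    """
    pairs = []
    for s_min, s_max, seg in segments:
        for c_min, c_max, tri in colliders:
            if aabb_overlap(s_min, s_max, c_min, c_max):  # broadphase
                if nearphase(seg, tri):                   # nearphase
                    pairs.append((seg, tri))
    return pairs
```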

The detection of collision pairs is currently handled by a BVH tree based structure. In the future it may become advisable to use the Bullet collision detection for finding such pairs, since it has a lot better optimizations for complicated intersection tests and broadphase filtering.

The second stage is to actually make a hair particle react to a collision, so that the hair is prevented from entering the mesh object. A simple approach is to generate a repulsion force which pushes outward from the mesh. However, this force can cause a lot of unwanted motion: a hair particle cannot stably come to rest on a surface, or the simulation can even “explode” when a particle gets trapped in a collider cavity and its velocity increases exponentially from repeated collision responses.

A much more elegant and stable approach to handling collision response is to define the contact between a hair and a mesh as a “constraint”: when the hair collides with a surface, its motion becomes restricted in the direction of the surface normal (while moving tangentially is still possible, and desired, to relax internal spring forces). An implicit solver can be modified so that collision constraints are taken into account, and jittering effects as well as spring instability are largely avoided.

Physics Settings

Settings in the hair dynamics panel need reorganization to be more intuitive and to allow easier tweaking. The naming there is currently misleading, and as a consequence artists tend to overconstrain the hair system by steadily increasing forces, until eventually the solver gives up and the simulation “explodes”.

The suggested changes would group the dynamics settings into four categories:

  1. Internal Forces: Structural features of the hairs in general (Bending, Stretching, Damping)
  2. Interaction: Friction and Volume Pressure settings, caused by concentrations of hair in the same space
  3. Collision: Bounciness (restitution) and friction of the hair
  4. External Forces: Effect of various force field types on the hair system

To avoid the problem of counterbalancing forces, this ordering should suggest a sensible workflow. Starting with the internal forces results in natural behavior of individual hairs. Setting up friction and damping effects afterwards should help avoid the problem of masking extreme forces with equally strong damping, which creates an “explosive” setup that is hard to control.

Each of the categories can be disabled on its own. This also helps to track down issues with any one of the influences in case something goes wrong; otherwise the only way to test the hair dynamics settings is to reset them to zero individually.

Presets could be another simple but effective way to facilitate tweaking. A fine-tuned group of settings can then be stored for later use or to generate variants from.

Guide Hairs

Editing parent hairs on Koro

Physical simulation is only one tool among many in 3D animation production. A major goal for the hair system is to improve tools for artists and combine classic keyframe animation with simulation. The current workflow of the particle hairs gives animators very little control over the simulation beyond the initial setup phase (“grooming”). The results of a simulation never turn out exactly as desired, and so it is very important that animators be able to define corrections to simulation results.

An important concept for simulation control is the rest position of hairs, i.e. the “natural” shape and orientation a hair will be attracted to by the internal bending forces and additional (non-physical) goal spring forces. This rest position is currently defined as a single shape. Defining keyframes for particle systems/hair is a clumsy process with a lot of overhead, far from a usable tool. After baking the entire simulation, artists can also modify the point cache data, treating the motion of each hair point as a curve, but this is also limited and doesn’t scale well to large hair systems.

Guide Hairs would solve the problem of keyframing the hair rest positions. They are the primary data structure that animators work with, using sculpting/grooming tools and keyframes if necessary. They are roughly equivalent to the current base hair system, although for clarity renaming them is a good idea.

Simulation Hairs form the second data layer in the hair system. They are initially generated from the guide hairs (which also form the sim hairs’ natural rest position). We have to decide how to display and distinguish these layers in the viewport, but it should be clear to artists that these are separate concepts.

Note that there could actually be more simulation hairs than guide hairs! This is an important feature which allows animators to work on a small set of hairs (easy to set up and control), while having more detail in simulations such as colliding with small objects. Generating simulation hairs can use the same interpolation approach as current child hairs.

Render Hairs are the current “child” hairs. They are not stored as permanent data and don’t carry state information of their own. Their purpose is only to generate sufficient visual detail for renderers. Render hairs can incorporate quite a few shaping features of their own, such as randomness, curling or tapering.
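To make the proposed layering concrete, here is a bare-bones sketch of the three data layers. The names and fields are hypothetical, purely to illustrate what state would live where:

```python
from dataclasses import dataclass, field

@dataclass
class GuideHair:
    """Artist-facing layer: groomed, sculpted and keyframed."""
    rest_points: list                      # defines the rest shape

@dataclass
class SimHair:
    """Simulation layer: interpolated from guides, carries dynamic state.
    There may be more of these than guide hairs."""
    rest_points: list                      # from guide interpolation
    positions: list = field(default_factory=list)
    velocities: list = field(default_factory=list)

@dataclass
class RenderHair:
    """Render-only layer (current 'child' hairs): no stored state."""
    parent: SimHair
    curl: float = 0.0
    taper: float = 1.0
```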

Further Reading

“Large Steps in Cloth Simulation” (Baraff/Witkin 1998): Extensive paper on the use of a modified Conjugate Gradient solver for cloth systems, including useful chapters on force derivations, constraints and collisions.

“Simulating Complex Hair with Robust Collision Handling” (Choe/Choi/Ko 2005): Detailed description of a hair collision response model using the CG solver method

“Artistic Simulation of Curly Hair” (Pixar technical paper, “Brave”): Very sophisticated hair model for long curly hair (collisions are too much for our purposes, but the bending model is very nice)

“Volumetric Methods for Simulation and Rendering of Hair” (Pixar technical paper, “The Incredibles”): Describes in detail the volumetric approach to hair-hair friction modeling

Categories: 3D Design

Anamorphic Bokeh

Blender - Thu, 08/21/2014 - 16:42

Cycles allows for photo-realistic rendering. Part of the realism comes from the simulation of photography parameters, such as lens, aperture size, and depth of field. When simulating anamorphic lenses, there is something Cycles still misses: anamorphic bokeh.

Anamorphic Bokeh Perspective Test

Generally speaking, “bokeh” is the shape we see from far away blurred light sources. It’s more evident in night shots. When working with anamorphic lenses (or when simulating them in Cycles), it’s important to stretch the bokeh according to the simulated lens.
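The core of the trick is small: sample the circular aperture as usual, then scale one axis by the anamorphic ratio before offsetting the camera ray. A sketch of the idea (plain Python, not Cycles’ actual kernel code):

```python
import math, random

def sample_anamorphic_lens(aperture_radius, ratio):
    """Return an (x, y) lens offset with anamorphically stretched bokeh.

    ratio = 1.0 gives the usual circular bokeh; larger ratios squeeze
    one axis, so out-of-focus highlights render as ovals.
    """
    r = math.sqrt(random.random())          # uniform sample on the unit disk
    theta = 2.0 * math.pi * random.random()
    x = r * math.cos(theta)
    y = r * math.sin(theta)
    return (aperture_radius * x / ratio,    # squeezed axis
            aperture_radius * y)
```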

Anamorphic Bokeh Fisheye Test

In a normal close-up scene the effect is subtle, but it gives an extra cinematographic feel. Compare these test renders from the Gooseberry Open Movie. From top to bottom we have a fisheye render, a fisheye render with anamorphic bokeh of 2.0, and a fisheye render with anamorphic bokeh of 3.0:

Frank Fisheye Regular Bokeh

Frank Fisheye Anamorphic Bokeh 2.0

Frank Fisheye Anamorphic Bokeh 3.0

Too subtle? Click on the images for a zoomed-in version, or look closely at the animated comparison:

Anamorphic Bokeh Frank Test

Another shot, now with 1.0 (normal bokeh), 2.0, 3.0 and 10.0.

Frank Bokeh 1.0 Fisheye

Frank Anamorphic Bokeh 2.0 Fisheye

Frank Anamorphic Bokeh 3.0 Fisheye

Frank Anamorphic Bokeh 10.0 Fisheye

In cinema we often see work done with a bokeh ratio of 1.33 or 1.5, or 2.0 for old movies. Nothing stops us from simulating other values, as demonstrated here.

Frank Anamorphic Bokeh Fisheye - Animated

This feature is aimed at Blender 2.72, so stay tuned and prepare your night shots. A special thank you to Aldo Zang for the help with the math part of the patch. Test scenes and feature request by Mathieu Auvrey.

Cheers,
Dalai Felinto

Update: The patch is currently up for review [here].

Categories: 3D Design

New Game Engine Publishing Addon

Blender - Vie, 06/27/2014 - 08:27

One of the common complaints about the Blender Game Engine concerns publishing games. While there are many issues related to publishing with the BGE, one is the lack of a simple, user-friendly way to publish to multiple platforms. Steps are being taken to resolve this with a new Game Engine Publishing addon that has recently been committed to master (it should be available in buildbot builds by now). This addon is intended to replace the old Save As Runtime addon, and currently provides the following improvements:

  • New panel in the Render Properties to control publishing (this also means publishing options are saved in the blend file)
  • Easier cross-platform publishing (this requires downloading the binaries for the desired platforms, see the addon’s wiki page for more information)
  • Ability to create archives (e.g., tarballs and zips) for published games
  • Ability to automatically copy extra game files (e.g., scripts, unpacked textures, logic, other blend files, etc.) when publishing

Screenshot of the current addon

This addon is still a work in progress, but users are encouraged to start playing with the addon and providing feedback. Some current goals for the addon include:

  • Creating a better way to download the binaries needed for publishing to other platforms (the current operator for doing this hangs Blender until it is done downloading, which can take a while)
  • Add an option to compile scripts
  • Add a way to ignore files when copying assets (e.g., __pycache__ folders, *.xcf and *.psd files)

More information about the addon as well as some documentation can be found on the addon’s wiki page.

Categories: 3D Design