
Viewport project – targets, current state of the code

Blender - Mon, 09/22/2014 - 13:52

Depth of field in progress

Grouping a broad issue with decentralized code, such as real-time drawing, under the umbrella of the “Viewport” project might be slightly misleading. The viewport project essentially encapsulates a few technical and artistic targets:

  • Performance improvement in viewport drawing, allowing greater vertex counts
  • Shader driven drawing – custom/user driven or automatic for both internal materials and postprocessing in viewport (includes eye candy targets such as HDR viewport, lens flares, PBR shaders, depth of field)
  • Portability of drawing code – this should allow us to switch with as little pain as possible to future APIs and to devices such as OpenGL ES-compatible hardware

These targets include code that has already been written as part of Blender and as part of the viewport GSOC projects by Jason Wilkins, and they will also require more code and a few decisions on our part. One of those decisions concerns the version of OpenGL that Blender will require from now on.

First, we should note that OpenGL ES 2.0 is a good target to develop for when we support mobile devices in the future, given those stats. OpenGL ES 2.0 means, roughly, that we need programmable shading everywhere – the fixed-function pipeline does not exist in that API. Using programmable shading only will also allow us to easily upgrade to a pure OpenGL 3.0+ core profile if/when we need to, since modern OpenGL has no fixed pipeline anymore.

For non-technical readers: OpenGL 3.0+ has two profiles, “compatibility” and “core”. While the compatibility profile is backwards compatible with previous versions of OpenGL, the core profile throws out a lot of deprecated API functionality, and vendors can enable more optimizations there, since they do not need to preserve compatibility with older features. Upgrading is not strictly required, since we can already use an OpenGL 3.0+ compatibility profile on most OS's (with the exception of OSX), and OpenGL extensions give us access to most features of modern OpenGL. Upgrading to core 3.0 would only force us to use certain OpenGL coding paradigms that are guaranteed to be “good practice”, since deprecated functionality does not exist there. Note, though, that those paradigms can be enforced now with OpenGL 2.1 (for instance, by using preprocessor directives to prohibit use of the deprecated functions, as done in the viewport GSOC branch).

So let's explore a few of those targets and ways to achieve them:

  • Performance:

This is the most deceptive target. Performance is not just a matter of upgrading to a better version of OpenGL (or to another API such as DirectX, as has been suggested in the past). Rather, it is a combination of using best practices when drawing, which are not being followed everywhere currently, and using the right API functions. In Blender's code we can benefit from:

  1. Avoid CPU overhead. This is the most important issue in Blender. Various drawing paths check the state of every face/edge before sending it to the GPU. Such checks should be cached and invalidated properly; this alone should make drawing of GLSL and textured meshes much faster. It requires rethinking our model of derivedmesh drawing: the current model uses polymorphic functions on our derived meshes to control drawing. Instead, drawing functions should be attached to the material types available for drawing, and derived meshes should provide materials with the requested data buffers. A change that will drastically improve the situation for textured drawing is redesigning the way we handle texture images per face. The difficulty here is that every face can potentially have a different image assigned, so we cannot easily make optimizing assumptions. Our current code loops over all mesh faces every frame, regardless of whether the display data has changed, and checks every face for images. This is also relevant to minimizing state changes – see below.
  2. Minimize state changes between materials and images. If we move to a shader driven pipeline this will be important, since changing between shaders incurs more overhead than simply changing numerical values of default phong materials.
  3. Only re-upload data that needs re-uploading. Currently, Blender uploads all vertex data to the GPU when a change occurs. It should be possible to update only a portion of that data: e.g. editing UVs should update only UV data; if the modifiers on a mesh are deform-only, only vertex positions need updating, and so on. This is hard to do currently because derivedmeshes are completely freed on mesh update, and the GPU data reside on the derivedmesh.
  4. Use modern features to accelerate drawing. This surely includes the instancing APIs in OpenGL (attribute- or uniform-based), which can only be used if we use shaders. Direct state access APIs and memory mapping can help eliminate driver overhead. Uniform buffer objects are a great way to pass data across shaders without rebinding uniforms and attributes per shader; however, they require shading language written explicitly for OpenGL 3.0+. Transform feedback can help avoid vertex streaming overhead in edit-mode drawing, where we redraw the same mesh multiple times. Note that most of these are straightforward to plug in once the core that handles shader-based, batch-driven drawing has been implemented.
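The partial re-upload idea in point 3 can be illustrated with a small dirty-flag sketch. This is a hypothetical Python illustration, not Blender code (the real derivedmesh/GPU code is C); the names `MeshBuffers` and `VBO_*` are invented for the example:

```python
# Minimal sketch of dirty-flag tracking for GPU vertex buffers.
# Only buffers whose data actually changed get re-"uploaded".

VBO_POSITIONS = "positions"
VBO_NORMALS = "normals"
VBO_UVS = "uvs"

class MeshBuffers:
    def __init__(self):
        self.data = {VBO_POSITIONS: [], VBO_NORMALS: [], VBO_UVS: []}
        self.dirty = set(self.data)  # everything needs a first upload
        self.uploads = []            # records what we "sent" to the GPU

    def edit(self, buffer_name, new_data):
        self.data[buffer_name] = new_data
        self.dirty.add(buffer_name)  # invalidate only this buffer

    def sync(self):
        # Upload only buffers marked dirty, then clear the flags.
        for name in sorted(self.dirty):
            self.uploads.append(name)
        self.dirty.clear()

bufs = MeshBuffers()
bufs.sync()                       # first sync uploads everything
bufs.edit(VBO_UVS, [(0.0, 0.0)])  # a UV edit marks only the UV buffer
bufs.sync()                       # second sync re-uploads just "uvs"
```

The point is that an edit invalidates one buffer, not the whole mesh; in real GL terms this would map to a partial update such as `glBufferSubData` instead of recreating every VBO.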
  • Shader Driven Drawing

The main challenge here is the combinatorial explosion of shaders (i.e. whether a shader uses lighting or not, uses texturing or not, is dynamically generated from nodes, etc.). Ideally we want to avoid switching shaders as much as possible. This can be accomplished by drawing per material, as explained above. We could probably implement a hashing scheme where materials that share the same hash also share the same shader, though this would incur its own overhead. Combinations are generated not only by different material options but also by various options used in painting, editors, objects, even user preferences. The aspect system in the works in the GSOC viewport branch attempts to tackle the issue by using predefined materials for most of Blender's drawing, with parameters to tweak the shaders.

Shader-driven materials open the door to other interesting things, such as GPU instancing and even deferred rendering. For the latter we already have some experiments in the viewport_experiments branch. For some compositing effects we can reconstruct world-space positions and normals even now using a depth buffer, but this is expensive. A multi-render-target approach here will help with performance, but again, this needs shader support. For starters, though, we can support a minimum set of ready-made effects for viewport compositing. Allowing full-blown user compositing or shading requires the aforementioned material system, where materials or effects can request mesh data appropriately.

Shader-driven drawing is of course important for real-time node-driven GLSL materials and PBR shaders too. These systems still need a good tool design, maybe even a redesign of the Blender Internal material system, which would make this much more long-term. Some users have proposed a visualization system separate from the renderers themselves.

How it all fits together and what expectations it creates is still an open issue: will users expect to get the viewport result during rendering, or do we allow certain shader-only real-time eye candy, with a separate real-time workflow?
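The hashing scheme mentioned above could look roughly like this. This is a hypothetical Python sketch, not Blender code: `compile_shader` and the feature keys stand in for real GLSL compilation and material options:

```python
# Sketch of a shader cache keyed by a hash of material features.
# Materials with identical feature sets share one compiled shader,
# so switching between them costs no shader change.

def compile_shader(features):
    # Pretend-compile: returns a distinct object per feature set.
    return {"features": features}

class ShaderCache:
    def __init__(self):
        self.cache = {}
        self.compiles = 0

    def shader_for(self, material):
        # The key contains every option that changes the generated code.
        key = (material["lighting"], material["textured"], material["nodes"])
        if key not in self.cache:
            self.cache[key] = compile_shader(key)
            self.compiles += 1
        return self.cache[key]

cache = ShaderCache()
a = cache.shader_for({"lighting": True, "textured": False, "nodes": False})
b = cache.shader_for({"lighting": True, "textured": False, "nodes": False})
c = cache.shader_for({"lighting": True, "textured": True, "nodes": False})
# a and b share one shader object; only c triggered a second compile
```

The overhead the text mentions is visible here too: every lookup must rebuild and hash the key, which is why drawing sorted per material is still the cheaper first step.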

Screen Space Ambient Occlusion shader on a sculpted mesh

  • Portability

Being able to support multiple platforms – in other words, multiple OpenGL versions or even graphics APIs – means that we need a layer that handles all GPU operations and allows no explicit OpenGL in the rest of the code, letting us replace the GPU implementation under Blender transparently. This has already been handled in the GSOC viewport 2013 branch (the 2014 branch is just the bare API at the moment, not hooked into the rest of Blender), with code that takes care of disallowing OpenGL functions outside the gpu module. That will also enable GLES and mobile device support, which is something Alexandr Kuznetsov has worked on and demonstrated a few years back.

  • Conclusion

As can be seen, some of those targets can be accomplished by adjusting the current system, while others are more ambitious and long-term. For Gooseberry, our needs are more urgent than the long-term deliverables of the viewport project, so we will probably focus on a few pathological cases of drawing and a basic framework for compositing (which cannot really be complete until we have a full shader-driven pipeline). However, in collaboration with Jason and Alexandr, we hope to finish and merge the code that will make those improvements possible on a bigger scale.

Categories: 3D Design

LibrePlanet is coming March 21-22, 2015, call for proposals now open for annual free software conference

FSF - Tue, 09/16/2014 - 21:56

LibrePlanet is an annual conference for free software enthusiasts. The conference brings together software developers, policy experts, activists and computer users to learn skills, share accomplishments and face challenges to software freedom. Newcomers are always welcome, and LibrePlanet 2015 will feature programming for all ages and experience levels.

This year, the theme of LibrePlanet is "Free Software Everywhere." The call for sessions seeks talks that touch on the many places and ways that free software is used around the world, as well as ways to make free software ubiquitous. Proposals are encouraged to consider "everywhere" in the broadest sense of the word. LibrePlanet 2015 will take software freedom around the world, to outer space, and consider its role in industry, government, academia, community organizing, and personal computing.

"LibrePlanet is one of the most rewarding things we do all year. This conference brings people from all over the planet who want to make the world a better place with free software," said John Sullivan, executive director of the FSF.

Call for Sessions

"I hope we'll receive session proposals from people with all levels of speaking and technical experience; you don't have to be a coder to speak at LibrePlanet. Free software users, activists, academics, policymakers, developers, and others are all key contributors to the free software movement, and we want to showcase all of these skills at LibrePlanet 2015," said Libby Reinish, a campaigns manager at the FSF.

Session proposals are currently being accepted at https://www.libreplanet.org/2015/call_for_sessions and are due by Sunday, November 2nd, 2014 at 19:59 EST (23:59 UTC).

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation, and is co-produced by the Student Information Processing Board. What was once a small gathering of FSF members has grown into a larger event for anyone with an interest in the values of software freedom. LibrePlanet is always gratis for associate members of the FSF. To sign up for announcements about LibrePlanet 2015, visit https://www.libreplanet.org/2015.

LibrePlanet 2014 was held at MIT from March 22-23, 2014. Over 350 attendees from all over the world came together for conversations, demonstrations, and keynotes centered around the theme of "Free Software, Free Society." You can watch videos from past conferences at http://media.libreplanet.org.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Libby Reinish
Campaigns Manager
Free Software Foundation
+1 (617) 542 - 5942
campaigns@fsf.org

###

Categories: Free Software

ThinkPenguin wireless router now FSF-certified to respect your freedom

FSF - Fri, 09/12/2014 - 23:45

The TPE-NWIFIROUTER comes pre-installed with libreCMC, an FSF-endorsed embedded GNU/Linux distribution.

"This is a big step forward for computer user freedom. For the first time, you can purchase a router that ships with only free software preinstalled. This router and OS give us a platform that we can trust and control, and that the community can use to begin building our own, free software based network for communication, file sharing, social networking, and more," said FSF's executive director John Sullivan.

This is the third product by ThinkPenguin to be awarded the use of the RYF certification mark. The first two were the TPE-N150USB Wireless N USB Adapter and the long-range TPE-N150USBL model.

Christopher Waid, ThinkPenguin's founder and CEO, said, "ThinkPenguin, Inc. was founded with the goal of making free software more easily adoptable by the masses. Everyone needs a wireless router in their homes, and so I am very proud that we are able to offer users a router that ships with 100% free software installed and that is backed by a reputable certification process provided by the FSF."

To learn more about the Respects Your Freedom hardware certification, including details on the certification of the TPE-N150USB Wireless N USB adapter, as well as information on the driver and firmware for the device, visit http://www.fsf.org/ryf. Hardware sellers interested in applying for certification can consult http://www.fsf.org/resources/hw/endorsement/criteria.

Subscribers to the FSF's Free Software Supporter newsletter will receive announcements about future Respects Your Freedom products.


About ThinkPenguin, Inc

Started by Christopher Waid, founder and CEO, ThinkPenguin, Inc. is a consumer-driven company with a mission to bring free software to the masses. At the core of company is a catalog of computers and accessories with broad support for GNU/Linux. The company provides technical support for end-users and works with the community, distributions, and upstream projects to make GNU/Linux all that it can be.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Media Inquiries
ThinkPenguin, Inc.
+1 (888) 39 THINK (84465) x703
media@thinkpenguin.com

###

Categories: Free Software

Free Software Foundation statement on the new iPhone, Apple Pay, and Apple Watch

FSF - Tue, 09/09/2014 - 20:27

Today, Apple announced new iPhone models, a watch, and a payment service. In response, FSF executive director John Sullivan made the following statement:

It is astonishing to see so much of the technology press acting as Apple's marketing arm. What's on display today is widespread complicity in hiding the most newsworthy aspect of the announcement -- Apple's continuing war on individual computer user freedom, and by extension, free speech, free commerce, free association, privacy, and technological innovation.

Every review that does not mention Apple's insistence on using Digital Restrictions Management (DRM) to lock down the devices and applications they sell is doing an extreme disservice to readers, and is a blow to the development of the free digital society we actually need. Any review that discusses technical specs without first exposing the unethical framework that produced those products, is helping usher people down a path that ends in complete digital disempowerment.

Keep a tally of how many reviews you read today mention that Apple threatens anyone who dares attempt installing another operating system like Android on their Apple phone or watch with criminal prosecution under the Digital Millennium Copyright Act (DMCA). Keep a tally of how many reviews mention that Apple devices won't allow you to install any unapproved applications, again threatening you with jail time if you attempt to do so without Apple's blessing. Keep a tally of how many reviews highlight Apple's use of software patents and an army of lawyers to attack those developing a more free computing environment than theirs.

We've seen several examples since the last Apple product announcement of times when smartphones and other computers have been used for political activism and important free speech. We've also seen several examples of times when such expressions have been censored. If we continue allowing Apple this kind of control, censorship and digital "free speech zones" will become the permanent norm.

There is a reason that the inventor of the personal computer shuns Apple devices as antithetical to vital kinds of creativity. But it's not enough to just say "Don't buy their products." The laws Apple and others use to enforce their digital restrictions, giving them a subsidized competitive advantage over products that respect user freedom, must be repealed.

At least the watch did end up having a clasp so you can remove it -- we were worried.

We urge users to investigate ways to support the use of mobile and wearable devices which do not restrict users' essential freedoms. Such projects include Replicant, a free software fork of Android, and F-Droid, an app repository of exclusively free software for Android. People should also let Tim Cook at Apple know how they feel.

Categories: Free Software

Hair System Roadmap

Blender - Mon, 09/08/2014 - 18:06

The Blender hair system will get a number of improvements for the Gooseberry project. Especially the hair dynamics have to be improved and integrated better into the set of artistic tools to allow animators to control and tweak the hair system efficiently. We have a number of goals that should make hair modelling and simulation into a more flexible and helpful tool.

Solver Stability

Animation tools for hair are quite useless without a stable physical solver. Especially for long hair, a physical solver is a valuable tool for generating believable motion. The solver has to be very stable, meaning that it produces correct values (no “explosions”) and does not introduce additional motion due to numerical errors (jiggling).

The current solver for the hair dynamics has a number of issues, resulting from conflicts in the mixed cloth/hair model, questionable assumptions in the force model and plain bugs. To avoid these issues the numerical solver implementation will be replaced by a modified Eigen-based solver. Eigen is a library for linear algebra that is already used in Blender and provides a lot of optimizations that would be hard to introduce otherwise.

Numerical Solver Overview (since this is a code blog)

The physical model for hair systems defines each hair as a series of points, connected by “springs”. In addition there are a couple of external influences that have to be accounted for. The physical equations boil down to calculating changes in positions and velocities of these points.

Our solver then has the task of calculating these Δx and Δv so that the result is as close as possible to the actual value. As a first-order approximation and using sensible force models the differential equations can be expressed as a linear system A·Δv = b (See the References section for in-depth information). The algorithm of choice for solving this system is the Conjugate Gradient method. The Eigen library provides a nice set of CG algorithms already.
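As a toy illustration of the method (not the actual Eigen-based implementation, which is C++ and far more optimized), here is plain conjugate gradient in pure Python on a small symmetric positive definite system, where the unknown plays the role of Δv:

```python
# Plain conjugate gradient for A·x = b with A symmetric positive definite.
# Pure-Python sketch of what a CG solver does for the hair system.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]              # residual b - A·x, with x = 0 initially
    p = r[:]              # first search direction
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr < tol:
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)                              # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]  # new direction
        rr = rr_new
    return x

# 2x2 SPD system whose exact solution is x = [1, 1]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n iterations for an n-dimensional system, which is why it is a good fit for the large sparse matrices a hair system produces.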

Unfortunately, for a constrained system such as a hair structure with “pinned” hair root points as well as collision contacts (see below) the basic CG solver is not enough. We need to extend the method somewhat to take constraints into account and limit the degrees-of-freedom in the solution selectively. The paper by Baraff/Witkin describes this modification in detail.
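A minimal sketch of that modification: a filter projects constrained degrees of freedom out of every residual and search direction, so the solution never moves them. This simplified version only handles fully pinned points (standing in for hair roots); the full Baraff/Witkin scheme uses per-point constraint matrices to handle partial constraints such as contacts:

```python
# Baraff/Witkin-style filtered CG sketch: "pinned" indices are projected
# out of the solve, so the corresponding points never move.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def filtered_cg(A, b, pinned, tol=1e-10, max_iter=100):
    def filt(v):
        # Zero out fully constrained components.
        return [0.0 if i in pinned else vi for i, vi in enumerate(v)]

    x = [0.0] * len(b)
    r = filt(b)
    p = r[:]
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr < tol:
            break
        Ap = filt(matvec(A, p))   # keep the iteration inside the subspace
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(filt(r), p)]
        rr = rr_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
dv = filtered_cg(A, b, pinned={0})  # DOF 0 is a hair root: it stays fixed
```

The filtered solve is equivalent to solving the reduced system on the unconstrained degrees of freedom, which is exactly what keeps pinned roots from drifting.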

Hair Volume and Friction

Hair and fur coats need a number of features that are notoriously difficult to model in a hair simulation: volume and friction. “Volume” is the phenomenon where many hairs close together push each other away and leave empty space between them (especially with curly hair). “Friction” is what makes entangled hair so difficult to comb, because hairs stick together and have lots of surface area.

Both these effects could be naively modeled by hair-hair collisions, but this is prohibitively expensive due to the potential number of collision pairs. A more economical approach is to model the cumulative effect of hairs using a voxel grid. This feature has already been implemented.
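The voxel idea can be sketched as follows (an illustrative simplification, not Blender's implementation): accumulate hair point velocities per grid cell, then blend each point's velocity toward its cell average. This damps relative motion between nearby hairs, a friction-like effect, without testing any hair-hair pairs:

```python
# Voxel-grid friction sketch: the cumulative effect of nearby hairs
# replaces O(n^2) hair-hair collision pairs.

def cell_of(pos, cell_size=1.0):
    # Map a 3D position to its integer voxel coordinates.
    return tuple(int(c // cell_size) for c in pos)

def apply_grid_friction(points, velocities, friction=0.5):
    # Pass 1: accumulate velocity sums and point counts per occupied cell.
    sums, counts = {}, {}
    for pos, vel in zip(points, velocities):
        key = cell_of(pos)
        s = sums.setdefault(key, [0.0, 0.0, 0.0])
        for i in range(3):
            s[i] += vel[i]
        counts[key] = counts.get(key, 0) + 1
    # Pass 2: blend each velocity toward its cell's average velocity.
    out = []
    for pos, vel in zip(points, velocities):
        key = cell_of(pos)
        avg = [s / counts[key] for s in sums[key]]
        out.append([v + friction * (a - v) for v, a in zip(vel, avg)])
    return out

pts = [(0.2, 0.2, 0.2), (0.8, 0.4, 0.1)]    # two points in the same voxel
vels = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]  # moving in opposite directions
damped = apply_grid_friction(pts, vels)     # both are pulled toward rest
```

Two linear passes over the points replace a quadratic number of pair tests, which is the "more economical" trade-off the paragraph describes.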

Collisions

Collisions are essential for believable simulation results, but so far they don't exist for hair simulation in Blender (only a volume-based friction model, which is a poor replacement).

The first stage in collision handling is to actually detect intersection of hair segments with meshes. This is done in two distinct phases to speed up the process:

  • Broadphase: The hair segment is tested against the bounding boxes of eligible colliders to narrow down the number of pairs. Acceleration structures can speed up the process of finding overlapping pairs.
  • Nearphase: The potential intersection pairs are tested for actual intersection of the detailed geometry.
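A minimal sketch of the broadphase step (hypothetical helper names, not Blender's BVH code): an axis-aligned bounding box is built around a hair segment and tested for overlap against collider boxes, which only costs a per-axis interval comparison:

```python
# Broadphase sketch: cheap AABB overlap tests filter collision pairs
# before any exact (nearphase) geometry test runs.

def aabb_of_segment(p0, p1, pad=0.0):
    # Axis-aligned box enclosing a hair segment, optionally padded
    # to catch near-misses within one timestep.
    lo = [min(a, b) - pad for a, b in zip(p0, p1)]
    hi = [max(a, b) + pad for a, b in zip(p0, p1)]
    return lo, hi

def aabb_overlap(box_a, box_b):
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    # Two boxes overlap iff their intervals overlap on every axis.
    return all(lo_a[i] <= hi_b[i] and lo_b[i] <= hi_a[i] for i in range(3))

seg = aabb_of_segment((0.0, 0.0, 0.0), (1.0, 1.0, 0.0))
near_collider = ([0.5, 0.5, -0.5], [2.0, 2.0, 0.5])
far_collider = ([5.0, 5.0, 5.0], [6.0, 6.0, 6.0])
# only the near collider survives the broadphase and reaches the nearphase
```

An acceleration structure (BVH, or Bullet's broadphase) organizes these boxes so that overlapping pairs are found without testing every segment against every collider.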

The detection of collision pairs is currently handled by a BVH-tree-based structure. In the future it may become advisable to use Bullet's collision detection for finding such pairs, since it has much better optimizations for complicated intersection tests and broadphase filtering.

The second stage is to actually make a hair particle react to a collision, so that the hair is prevented from entering the mesh object. A simple approach is to generate a repulsion force which pushes outward from the mesh. However, this force can cause a lot of unwanted motion. The effect is that a hair particle cannot stably come to rest on a surface, or the simulation can even “explode” when a particle gets trapped in a collider cavity and its velocity increases exponentially from repeated collision responses.

A much more elegant and stable approach to handling collision response is to define the contact between a hair and a mesh as a “constraint”: when the hair collides with a surface, its motion becomes restricted in the direction of the surface normal (while moving tangentially is still possible, and desired, to relax internal spring forces). An implicit solver can be modified so that collision constraints are taken into account, and jittering effects as well as spring instability are largely avoided.
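The difference from a repulsion force can be seen in a tiny velocity-projection sketch (illustrative only; in the real solver the constraint lives inside the modified implicit solve described above): the velocity component pointing into the surface is removed, while tangential sliding survives, so nothing is added that could later overshoot:

```python
# Constraint view of a contact: remove the penetrating normal component
# of the velocity, keep tangential motion. No repulsion force is added.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def constrain_contact(vel, normal):
    # normal is the unit surface normal at the contact point
    vn = dot(vel, normal)
    if vn >= 0.0:
        return list(vel)  # already separating: constraint is inactive
    # subtract the component that points into the surface
    return [v - vn * n for v, n in zip(vel, normal)]

v = [1.0, -2.0, 0.0]          # sliding right while falling into a floor
n = [0.0, 1.0, 0.0]           # floor normal points up
v2 = constrain_contact(v, n)  # downward motion removed, sliding kept
```

Because the result never gains energy at the contact, a particle can come to rest on a surface instead of bouncing between repulsion impulses.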

Physics Settings

Settings in the hair dynamics panel need reorganization to be more intuitive and allow easier tweaking. The naming there is currently misleading, and as a consequence artists tend to overconstrain the hair system by steadily increasing forces until eventually the solver gives up and the simulation “explodes”.

The suggested changes would group the dynamics settings into four categories:

  1. Internal Forces: Structural features of the hairs in general (Bending, Stretching, Damping)
  2. Interaction: Friction and Volume Pressure settings, caused by concentrations of hair in the same space
  3. Collision: Bounciness (restitution) and friction of the hair
  4. External Forces: Effect of various force field types on the hair system

To avoid the problem of counterbalancing forces, this ordering should suggest a sensible workflow. Starting with the internal forces yields natural behavior of individual hairs. Setting up friction and damping effects afterwards should help avoid masking extreme forces with equally strong damping, which creates an “explosive” setup that is hard to control.

Each of the categories can be disabled on its own. This also helps in tracking down issues with any one of the influences when something goes wrong; otherwise the only way to test the hair dynamics settings is to reset them to zero individually.

Presets could be another simple but effective way to facilitate tweaking. A fine-tuned group of settings can then be stored for later use or to generate variants from.

Guide Hairs

Editing parent hairs on Koro

Physical simulation is only one tool among many in 3D animation production. A major goal for the hair system is to improve tools for artists and combine classic keyframe animation with simulation. The current workflow of the particle hairs gives animators very little control over the simulation beyond the initial setup phase (“grooming”). The results of a simulation never turn out exactly as desired, and so it is very important that animators be able to define corrections to simulation results.

An important concept for simulation control is the rest position of hairs, i.e. the “natural” shape and orientation a hair will be attracted to by the internal bending forces and additional (non-physical) goal spring forces. This rest position is currently defined as a single shape. Defining keyframes for particle system/hair is a clumsy process with a lot of overhead and far from a usable tool. After baking the entire simulation artists can also modify the point cache data, treating the motion of each hair point as a curve, but this is also limited and doesn’t scale well to large hair systems.

Guide Hairs would solve the problem of keyframing the hair rest positions. They are the primary data structure that animators work with, using sculpting/grooming tools and keyframes if necessary. They are roughly equivalent to the current base hair system, although for clarity renaming them is a good idea.

Simulation Hairs form the second data layer in the hair system. They are initially generated from the guide hairs (which also form the sim hairs’ natural rest position). We have to decide how to display and distinguish these layers in the viewport, but it should be clear to artists that these are separate concepts.

Note that there could actually be more simulation hairs than guide hairs! This is an important feature which allows animators to work on a small set of hairs (easy to set up and control), while having more detail in simulations such as colliding with small objects. Generating simulation hairs can use the same interpolation approach as current child hairs.
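The interpolation step might look like this (a hypothetical sketch of the idea, not the actual particle code): each simulation hair is a weighted blend of guide hairs, point by point, just as current child hairs are blended from their parents:

```python
# Sketch of generating a simulation hair from guide hairs by weighted
# interpolation, the same idea used for current child hairs.

def interpolate_hair(guides, weights):
    # guides: list of hairs, each a list of (x, y, z) points of equal length
    # weights: one blend weight per guide hair, summing to 1
    n_points = len(guides[0])
    result = []
    for i in range(n_points):
        point = [0.0, 0.0, 0.0]
        for hair, w in zip(guides, weights):
            for axis in range(3):
                point[axis] += w * hair[i][axis]
        result.append(tuple(point))
    return result

guide_a = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
guide_b = [(1.0, 0.0, 0.0), (1.0, 0.0, 1.0)]
# a simulation hair halfway between the two guides
sim_hair = interpolate_hair([guide_a, guide_b], [0.5, 0.5])
```

Since the weights are independent of the guide count, many simulation hairs can be generated from a small, animator-friendly set of guides.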

Render Hairs are the current “child” hairs. They are not stored as permanent data and don’t carry state information of their own. Their purpose is only to generate sufficient visual detail for renderers. Render hairs can incorporate quite a few shaping features of their own, such as randomness, curling or tapering.

Further Reading

“Large Steps in Cloth Simulation” (Baraff/Witkin 1998): Extensive paper on the use of a modified Conjugate Gradient solver for cloth systems, including useful chapters on force derivations, constraints and collisions.

“Simulating Complex Hair with Robust Collision Handling” (Choe/Choi/Ko 2005): Detailed description of a hair collision response model using the CG solver method

“Artistic Simulation of Curly Hair” (Pixar technical paper, “Brave”): Very sophisticated hair model for long curly hair (collisions are too much for our purposes, but the bending model is very nice)

“Volumetric Methods for Simulation and Rendering of Hair” (Pixar technical paper, “The Incredibles”): Describes in detail the volumetric approach to hair-hair friction modeling

Categories: 3D Design

FSF and Debian join forces to help free software users find the hardware they need

FSF - Mon, 09/08/2014 - 17:37

While other databases list hardware that is technically compatible with GNU/Linux, h-node lists hardware as compatible only if it does not require any proprietary software or firmware. Information about hardware that flunks this test is also included, so users know what to avoid. The database lists individual components, like WiFi and video cards, as well as complete notebook systems.

The compatibility information comes from users testing hardware on systems running only free software. Previously, h-node site guidelines required that they be running one of the FSF's endorsed distributions. While the FSF does not include Debian on this list because the Debian project provides a repository of nonfree software, the FSF does acknowledge that Debian's main repository, which by default is the only place packages come from, is completely free.

"Unlike other common GNU/Linux distributions, installing official Debian by default means installing only free software. As long as Debian users do not add additional package repositories, their systems are a reliable source of fully free compatibility information. We're looking forward to working with Debian to help free software users get the hardware they need, and encourage the companies who provide it," said FSF's executive director John Sullivan.

"By collaborating with h-node, Debian for the first time has the opportunity to join efforts with other free software communities on the assembly of a database of hardware that doesn't require anything outside the Debian main archive to work properly," said Lucas Nussbaum, Debian Project Leader. "Debian is confident that the fruits of this collaboration will result in the largest curated database of Debian-compatible hardware, and invites all Debian community members to contribute hardware compatibility information to h-node."

H-node was started by Antonio Gallo, who continues to be the project's lead developer. The FSF now provides infrastructure and support. The software powering the site is also distributed as free software under version 3 of the GNU General Public License.

Users can contribute either by running one of the FSF's endorsed distributions, or Debian with only packages from the default main archive installed. Developers and translators can contribute by working on the site's code. Information for getting involved is at http://h-node.org/help/page/en/Help.


Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Lucas Nussbaum
Debian Project Leader
press@debian.org

Categories: Free Software

Free Software Foundation adds libreCMC to its list of endorsed distributions

FSF - Thu, 09/04/2014 - 21:50

The FSF's list consists of ready-to-use full systems whose developers have made a commitment to follow the Guidelines for Free System Distributions. This means each distro includes and steers users toward exclusively free software. All distros reject nonfree software, including firmware "blobs," and nonfree documentation.

The wireless network router is a ubiquitous device found in almost every home or business. Virtually all routers on the market today ship with proprietary operating systems. With libreCMC, users can now replace the proprietary operating system on many routers with a 100% free software operating system.

"Today, if you run libreCMC on your home router, you will gain more control over your computing and over the security of your communications. Over time, as a platform designed for and by free software users, we hope libreCMC will make it easy for any user to run their own services, and to remotely access and share files without having to rely upon third-parties," said Joshua Gay, FSF's licensing and compliance manager.

Bob Call, the founder and lead maintainer of libreCMC, said, "The core goals of the libreCMC project are to provide a solid platform that gives users the freedom to control their computing, both in the embedded and large application spaces and eventually in the area of high-performance computing. Right now, libreCMC supports five different versions of routers, as well as the Ben NanoNote. In the future, we hope to expand support to more devices, provide an easy solution for users to host their own services, and pave the way for free software to expand in the embedded world."

The FSF is currently evaluating routers running libreCMC for its Respects Your Freedom hardware certification program.

More info about libreCMC and how to get involved can be found at http://librecmc.org.


About the GNU Operating System and Linux

Richard Stallman announced in September 1983 the plan to develop a free software Unix-like operating system called GNU. GNU is the only operating system developed specifically for the sake of users' freedom. See https://www.gnu.org/gnu/the-gnu-project.html.

In 1992, the essential components of GNU were complete, except for one, the kernel. When in 1992 the kernel Linux was re-released under the GNU GPL, making it free software, the combination of GNU and Linux formed a complete free operating system, which made it possible for the first time to run a PC without non-free software. This combination is the GNU/Linux system. For more explanation, see https://www.gnu.org/gnu/gnu-linux-faq.html.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Bob Call
Founder & Maintainer
LibreCMC
bob@bobcall.me

###

Categories: Free Software

GNU hackers unmask massive HACIENDA surveillance program and design a countermeasure

FSF - Fri, 08/22/2014 - 22:59

After making key discoveries about the details of HACIENDA, Julian Kirsch, Dr. Christian Grothoff, Jacob Appelbaum, and Dr. Holger Kenn designed the TCP Stealth system to protect unadvertised servers from port scanning.

According to Heise Online, the intelligence agencies of the United States, Canada, United Kingdom, Australia and New Zealand are involved in HACIENDA. The agencies share the data they collect. The HACIENDA system also hijacks civilian computers, allowing it to leach computing resources and cover its tracks.

Some of the creators of TCP Stealth are also prominent contributors to the GNU Project, a major facet of the free software community and a hub for political and technological action against bulk surveillance. Free software is safer because it is very hard to hide malicious code in a program anyone can read. In proprietary software, there is no way to guarantee that programs don't hide backdoors and other vulnerabilities. The team revealed their work on August 15, 2014 at the annual GNU Hackers' Meeting in Germany, and Julian Kirsch presented it in his master's thesis.

Maintainers of Parabola, an FSF-endorsed GNU/Linux distribution, have already implemented TCP Stealth, making Parabola users safer from surveillance. The FSF encourages other operating systems to follow Parabola's lead.

The Free Software Foundation supports and sponsors the GNU Project. FSF campaigns manager Zak Rogoff said, "Every time you use a free software program, you benefit from the work of free software developers inspired by the values of transparency and bottom-up collaboration. But on occasions like these, when our civil liberties are threatened with technological tools, the deep importance of these values becomes obvious. The FSF is proud to support the free software community in its contributions to the resistance against bulk surveillance."

The Free Software Foundation works politically for an end to mass surveillance. Simultaneously, the Foundation advocates for individuals of all technical skill levels to take a variety of actions against bulk surveillance.

About Julian Kirsch, Christian Grothoff, Jacob Appelbaum, and Holger Kenn

Julian Kirsch is the author of "Improved Kernel-Based Port-Knocking in Linux", his Master's Thesis in Informatics at Technische Universität München.

Dr. Christian Grothoff is the Emmy-Noether research group leader in Computer Science at Technische Universität München.

Jacob Appelbaum is an American independent computer security researcher and hacker. He was employed by the University of Washington, and is a core member of the Tor project, a free software network designed to provide online anonymity.

Dr. Holger Kenn is a computer scientist specializing in wearable computing, especially software architectures, context sensor systems, human machine interfaces, and wearable-mediated human robot cooperation.



Media Contacts

Zak Rogoff
Campaigns Manager
Free Software Foundation
+1-617-542-5942
campaigns@fsf.org

"Knocking down the HACIENDA" by Julian Kirsch, produced by GNU, the GNUnet team, and edited on short notice by Carlo von Lynx from #youbroketheinternet is licensed under a Creative Commons Attribution NoDerivatives 3.0 Unported License.

Categories: Free Software

Anamorphic Bokeh

Blender - Thu, 08/21/2014 - 16:42

Cycles allows for photo-realistic rendering. Part of the realism comes from simulating photography parameters such as lens, aperture size, and depth of field. When simulating anamorphic lenses, however, there was one thing Cycles still missed: anamorphic bokeh.

Anamorphic Bokeh Perspective Test

Generally speaking, “bokeh” is the shape that blurred, out-of-focus light sources take; it is most evident in night shots. When working with anamorphic lenses (or when simulating them in Cycles), it is important to stretch the bokeh according to the simulated lens.
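To make the idea concrete, here is a minimal Python sketch of how an anamorphic ratio can stretch the aperture samples that produce the bokeh shape. This only illustrates the principle, it is not Cycles' actual implementation, and the function names are made up:

```python
import random

def disk_sample(rng):
    """Uniform point on the unit disk (rejection sampling)."""
    while True:
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return x, y

def anamorphic_sample(ratio, rng):
    """Aperture sample stretched horizontally by the anamorphic ratio."""
    x, y = disk_sample(rng)
    return x * ratio, y

# With a ratio of 2.0, the sampled bokeh ends up twice as wide as it is tall.
rng = random.Random(0)
pts = [anamorphic_sample(2.0, rng) for _ in range(10000)]
width = max(x for x, _ in pts) - min(x for x, _ in pts)
height = max(y for _, y in pts) - min(y for _, y in pts)
print(round(width / height, 1))  # ≈ 2.0
```

A ratio of 1.0 reproduces the normal circular bokeh; the values used in the renders below (2.0, 3.0, 10.0) stretch it further.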

Anamorphic Bokeh Fisheye Test

In a normal close-up scene the effect is subtle, but it adds an extra cinematographic touch. Compare these test renders from the Gooseberry Open Movie. From top to bottom: a fisheye render, a fisheye render with an anamorphic bokeh of 2.0, and a fisheye render with an anamorphic bokeh of 3.0:

Frank Fisheye Regular Bokeh

Frank Fisheye Anamorphic Bokeh 2.0

Frank Fisheye Anamorphic Bokeh 3.0

Too subtle? Click on the images for a zoomed-in version, or look closely at the animated comparison:

Anamorphic Bokeh Frank Test

Another shot, now with ratios of 1.0 (normal bokeh), 2.0, 3.0, and 10.0.

Frank Bokeh 1.0 Fisheye

Frank Anamorphic Bokeh 2.0 Fisheye

Frank Anamorphic Bokeh 3.0 Fisheye

Frank Anamorphic Bokeh 10.0 Fisheye

In cinema we often see work done with bokeh ratios of 1.33 or 1.5, or, in old movies, 2.0. Nothing stops us from simulating other values, as demonstrated here.

Frank Anamorphic Bokeh Fisheye - Animated

This feature is aimed at Blender 2.72, so stay tuned and prepare your night shots. A special thank you to Aldo Zang for his help with the math part of the patch. Test scenes and feature request by Mathieu Auvrey.

Cheers,
Dalai Felinto

Categories: 3D Design

New Game Engine Publishing Addon

Blender - Fri, 06/27/2014 - 08:27

One of the common complaints about the Blender Game Engine concerns publishing games. While there are many issues related to publishing with the BGE, one of them is the lack of a simple, user-friendly way to publish to multiple platforms. Steps are being taken to resolve this with a new Game Engine Publishing addon that was recently committed to master (it should be available in buildbot builds by now). This addon is intended to replace the old Save As Runtime addon, and currently provides the following improvements:

  • New panel in the Render Properties to control publishing (this also means publishing options are saved in the blend file)
  • Easier cross-platform publishing (this requires downloading the binaries for the desired platforms, see the addon’s wiki page for more information)
  • Ability to create archives (e.g., tarballs and zips) for published games
  • Ability to automatically copy extra game files (e.g., scripts, unpacked textures, logic, other blend files, etc.) when publishing

Screenshot of the current addon

This addon is still a work in progress, but users are encouraged to start playing with the addon and providing feedback. Some current goals for the addon include:

  • Creating a better way to download needed binaries for publishing to other platforms (the current operator for doing this hangs Blender until it is done downloading, which can take a while)
  • Add an option to compile scripts
  • Add a way to ignore files when copying assets (e.g., __pycache__ folders, *.xcf and *.psd files)
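As an illustration of that last goal, an ignore list can be implemented with Python's standard library alone. The patterns and helper names below are hypothetical, a sketch rather than the addon's actual code:

```python
import fnmatch
import os
import zipfile

# Hypothetical ignore list, mirroring the examples above.
IGNORE_PATTERNS = ["__pycache__", "*.xcf", "*.psd"]

def should_ignore(name):
    """True if a file or folder name matches any ignore pattern."""
    return any(fnmatch.fnmatch(name, pat) for pat in IGNORE_PATTERNS)

def archive_game(src_dir, archive_path):
    """Zip a published game folder, skipping ignored files and folders."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(src_dir):
            # Prune ignored folders in place so os.walk never descends into them.
            dirs[:] = [d for d in dirs if not should_ignore(d)]
            for name in files:
                if not should_ignore(name):
                    full = os.path.join(root, name)
                    zf.write(full, os.path.relpath(full, src_dir))
```

The same filtering would apply equally to the addon's "copy extra game files" step; only the archive call changes.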

More information about the addon as well as some documentation can be found on the addon’s wiki page.

Categories: 3D Design

US Supreme Court makes the right decision to nix Alice Corp. patent, but more work needed to end software patents for good

FSF - Thu, 06/19/2014 - 23:30

The FSF, Software Freedom Law Center (SFLC), and Open Source Initiative (OSI) had co-filed an amicus curiae brief in the case, stating their position that software on general-purpose computers is not patentable.

"Today's ruling is an important and meaningful step in the right direction, but the Court and Congress must go further," said Zak Rogoff, a campaigns manager at the FSF.

Software patents force software developers, especially those who write free software, to navigate a minefield of spurious legal claims. The number of software patents has ballooned as software companies have scrambled to amass arsenals of patents to threaten each other, as in the recently exposed aggression by Microsoft against Google over smartphone patents.

In the case ruled on today, Alice Corp. had claimed a patent for an unoriginal idea, simply because it was implemented in software to run on a computer.

FSF executive director John Sullivan lauded the Supreme Court for recognizing this: "For years, lawyers have been adding 'on a computer' to the end of abstract idea descriptions to try and turn them into patents, much like kids have been adding 'in bed' to the end of their fortune cookies to try and make new jokes. We're pleased to see the Court reject this attempt and send a signal to others."

For decades, the FSF has argued that it is impossible to solve the problem of software patents by getting individual software patents struck down. The FSF will continue to work for their complete abolition, and participate actively in future legal decisions. Those wishing to become involved in the grassroots movement against software patents can get started with the FSF-hosted End Software Patents project and its prominent wiki. An analysis of the Supreme Court's ruling is currently underway on the wiki and open for public participation.

Sullivan added, "Software patents are a noxious weed that needs to be ripped out by the roots. Too many organizations are clamoring for 'reform,' thinking they can trim the weed into a Bonsai. The FSF is one of the few organizations working for the only real solution. Software on general-purpose computers is not patentable, period."


Media Contacts

Zak Rogoff
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

###

Categories: Free Software

Tehnoetic, an FSF-certified USB wireless adapter that respects your freedom

FSF - Fri, 06/06/2014 - 22:58
BOSTON, Massachusetts, USA -- Wednesday, May 21, 2014 -- The Free Software Foundation (FSF) today awarded its Respects Your Freedom (RYF) certification to the Tehnoetic TET-N150 USB wireless adapter.
Categories: Free Software

To help Reset the Net, FSF launches guide to email protection

FSF - Thu, 06/05/2014 - 16:47
BOSTON, Massachusetts, USA -- Thursday, June 5th, 2014 -- The Free Software Foundation (FSF) today released Email Self-Defense, a how-to guide for setting up and using email encryption.
Categories: Free Software

Supporting Game Developers with Blender 2.71

Blender - Wed, 06/04/2014 - 00:19

For the 2.71 release, we’ve been working on improving support for game developers using Blender with external engines. To this end, Bastien Montagne has been working on a new FBX exporter, and I have been evaluating workflows to various external engines. Dalai Felinto has also been hard at work with Cycles baking. Below you’ll find some information on the new goodies you can expect in 2.71 for game developers.

New Binary FBX Exporter

Blender can now export binary FBX files (version 7.4). Some benefits of this new exporter include:

  • Smaller filesize
  • Quicker exports
  • Custom properties
  • Real materials with textures linked to their properties (specular, diffuse, etc.)
  • Textures can be embedded directly in the FBX file (instead of copying to a sub-folder)
  • New “bake space transform” option (only recommended for static models for now, though it should work for animated ones too), which makes it possible to get the same rotation values in exported data as in Blender, even though the coordinate systems (i.e., up/front axes) do not match. For example, if your monkey is null-rotated in Blender's (-Y, Z) space, then with this option it will still have a null rotation when exported to Unity's (-Z, Y) space; in other words, the corrective rotation is “baked” into the mesh data itself.
  • Writing correct axes and scale data in the FBX file (not many other apps really make use of this info yet)
  • Exporting tangent space data (normal and bitangent vectors) for meshes
  • Baked animation for both bones and objects. Note that “baked” animation means the animation is played back in Blender and all loc/rot/scale values are recorded, so all indirect animation should work, even complex setups based on constraints or drivers (resulting animations are cleaned up to remove unnecessary keyframes and curves). There are three animation baking modes:
    • When the NLA Strip option is enabled, each strip is exported as an anim stack, gathering everything that is animated by that strip (i.e., it can generate animation for several objects).
    • When the All Actions option is enabled, each action is checked against each object and, if they match, an anim stack is generated affecting *only* that object with that animation.
    • If neither of the previous options is enabled (or if they do not generate any animation), the whole scene is baked into a single animation stack.
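The “bake space transform” option boils down to applying the corrective rotation to the vertex data instead of to the object. A small, self-contained sketch of the underlying math (illustrative only, not the exporter's code):

```python
import math

def rotate_x(deg, v):
    """Rotate a 3D vector around the X axis by `deg` degrees."""
    r = math.radians(deg)
    x, y, z = v
    return (x,
            y * math.cos(r) - z * math.sin(r),
            y * math.sin(r) + z * math.cos(r))

# Blender is Z-up; engines like Unity are Y-up. Baking a -90° X rotation
# into the mesh maps Blender's up axis (0, 0, 1) onto the engine's (0, 1, 0),
# so the exported object itself can keep a null rotation.
up_blender = (0.0, 0.0, 1.0)
baked = rotate_x(-90.0, up_blender)
print(tuple(round(c, 6) for c in baked))  # (0.0, 1.0, 0.0)
```

Without the option, the same -90° rotation would instead be written on the object's transform, which is what produces the surprising non-null rotation values many users see after import.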

In order to stress-test the new exporter, I got together some assets to create a simple level and a couple of characters. I then tried exporting to Unity, Unreal Development Kit (UDK), and Unreal Engine 4 (UE4). The results are summarized below, for the full notes please check out http://wiki.blender.org/index.php/User:Moguri/ExportDocs.

Unity

What works:

  • Static meshes (including UVs)
  • Skeletal meshes
  • Skeletal mesh animations (including IK constraints)
  • Flat and smooth shading
  • NGons
  • Exporting whole levels
  • Exporting materials (materials will be created in Unity with the name and color of the exported material, and embedded diffuse textures will be set up)

What doesn’t work:

  • Collision meshes (these have to be set up in Unity)
  • Shape key animations
  • Object animations

Notes:

  • The default scale and axis options work well (Scale: 1.00, Forward: -Z Forward, Up: Y Up). Just make sure to set the scaling in Unity (it defaults to 0.1).

UDK

What works:

  • Static meshes (including UVs)
  • Exporting materials (materials will be created in UDK with the name and color of the exported material, and embedded diffuse textures will be set up)
  • Exporting meshes as collision meshes (they must be named UCX_XX, where XX is the name of the mesh that the collision mesh is for)
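A trivial helper that follows the collision naming convention above (hypothetical code, just to illustrate the rule that both UDK and UE4 use):

```python
def collision_name(mesh_name):
    """UDK/UE4 convention: the collision mesh for 'Crate' is 'UCX_Crate'."""
    return "UCX_" + mesh_name

def match_collisions(mesh_names):
    """Map each mesh to its collision mesh, if one follows the convention."""
    names = set(mesh_names)
    return {m: collision_name(m) for m in mesh_names
            if collision_name(m) in names}

print(match_collisions(["Crate", "UCX_Crate", "Barrel"]))
# {'Crate': 'UCX_Crate'}
```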

What doesn’t work:

  • Exporting whole levels (you’ll need to export individual assets and put them together in UDK)
  • Skeletal meshes and animations (use the PSK/PSA export instead, it is much more reliable)
  • Smooth vs flat shading doesn’t seem to work well
  • Object animations
  • Shape key animations

Notes:

  • Textures must be powers of two and in a supported image format (JPEG isn’t supported, but PNG is)
  • The default axis options work (Forward: -Z Forward, Up: Y Up), but scaling should be set to 100.0
  • UDK complains about an incompatible FBX version (UDK uses 7.3.0 while Blender exports 7.4.0), but things seem mostly fine
  • Collision meshes can also be created in UDK (i.e., you don’t have to rely on importing a collision mesh)
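A quick way to check the power-of-two texture requirement before exporting (a generic bit trick, not part of Blender or UDK):

```python
def is_power_of_two(n):
    """True when n has exactly one bit set: 1, 2, 4, 8, ..."""
    return n > 0 and (n & (n - 1)) == 0

# Texture dimensions UDK will accept vs. reject:
print([s for s in (256, 300, 512, 1000, 1024) if is_power_of_two(s)])
# [256, 512, 1024]
```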

UE4

What works:

  • Static meshes (including UVs)
  • Skeletal meshes
  • Skeletal mesh animations (including IK constraints)
  • Flat and smooth shading
  • NGons
  • Exporting materials (materials will be created in UE4 with the name and color of the exported material, and embedded diffuse textures will be set up)
  • Exporting meshes as collision meshes (they must be named UCX_XX, where XX is the name of the mesh that the collision mesh is for)

What doesn’t work:

  • Exporting whole levels (you’ll need to export individual assets and put them together in UE4)
  • Shape key animations
  • Object animations

Notes:

  • Textures must be powers of two and in a supported image format (JPEG isn’t supported, but PNG is)
  • The default axis options work (Forward: -Z Forward, Up: Y Up), but scaling should be set to 100.0
  • UE4 complains about an incompatible FBX version (UE4 uses 7.3.0 while Blender exports 7.4.0), but things seem mostly fine
  • Collision meshes can also be created in UE4 (i.e., you don’t have to rely on importing a collision mesh)

Limitations

One currently known limitation (compared to the ASCII version) is that the binary exporter does not export shape keys, which can be imported into Unity as BlendShapes.

Cycles Baking

2.71 will also include Cycles baking. There was already a full blog post on Cycles baking, which you can find here.

How You Can Help

Grab a build from the buildbot and report issues you are running into to the tracker. You can also make suggestions on the bf-gamedev mailing list for how we can help you improve your workflow.

Categories: 3D Design