Statement in support of Software Freedom Conservancy and Christoph Hellwig, GPL enforcement lawsuit

FSF - Thu, 03/05/2015 - 18:55

On Thursday, March 5, 2015, Christoph Hellwig, with support from the Software Freedom Conservancy, filed suit in Hamburg, Germany against VMware Global, Inc. Hellwig is a prominent contributor to the kernel Linux, releasing his contributions under the terms of the GNU General Public License (GPL) version 2. VMware, like everyone, is free to use, modify, and distribute such software under the GPL, so long as they make available the human-readable source code corresponding to their version of the software when they distribute it.

This simple and fair obligation is the cornerstone of the successful cooperation we've seen for decades between organizations both for-profit and non-profit, users, and developers—the same cooperation which has given us the GNU/Linux operating system and inspired a wealth of free software programs for nearly every imaginable use.

Unfortunately, VMware has broken this promise by not releasing the source code for the version of the operating system kernel they distribute with their ESXi software. Now, after many years of trying to work with VMware amicably, the Software Freedom Conservancy and Hellwig have sought the help of German courts to resolve the matter. While the Free Software Foundation (FSF) is not directly involved in the suit, we support the effort.

"From our conversations with the Software Freedom Conservancy, I know that they have been completely reasonable in their expectations with VMware and have taken all appropriate steps to address this failure before resorting to the courts. Their motivation is to stand up for the rights of computer users and developers worldwide, the very same rights VMware has enjoyed as a distributor of GPL-covered software. The point of the GPL is that nobody can claim those rights and then kick away the ladder to prevent others from also receiving them. We hope VMware will step up and do the right thing," said John Sullivan, FSF's executive director.

The suit and preceding GPL compliance process undertaken by Conservancy mirror the work that the FSF does in its own Licensing and Compliance Lab. Both the FSF and Conservancy take a fair, non-profit approach to GPL enforcement, favoring education and collaboration as a means of helping others properly distribute free software. Lawsuits are always a last resort.

You can support Conservancy's work on this case by making a donation.

Media Contact

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Categories: Free Software

More Dependency Graph Tricks

Blender - Wed, 03/04/2015 - 00:12

The new dependency graph enables several corner cases that were not possible in the old system, in part by making evaluation finer-grained, and in part by enabling driving from new datablocks. A nice image to illustrate this is the datablock popup in the driver editor:

In the previous image, the highlighted menu item is the only option that is guaranteed to update in current Blender. While testing and development are still very much a work in progress, the goal is that all or most of those menu items will become valid driver targets. I’m in the process of testing examples of what works and what doesn’t and submitting them to Sergey – this is going to be a moving target until the refactor is complete.

The two examples in this post are based on some of the new working features:

Driving from (shape) key blocks leads to amazing rigging workflow

That weird little icon in the menu above with a cube and key on it that just says ‘Key’ is the shapekey datablock, which stores all the shapekeys in a mesh. And here’s the insanity: you can now use a shapekey to drive something else. Why the heck is that cool, you ask? Well, for starters, it makes setting up correction shapes really, really easy.

Correction shapes here means those extra shapes one makes to make the combination of two other shapes palatable. For instance, if you combine the ‘smile’ and ‘open’ shapes for Proog’s mouth, you get a weird thing that looks almost like a laugh, but not quite, and distorts some of the vertices in an unphysical way. The typical solution is to create a third shape, ‘smile+open’, that tweaks those errors and perfects the laughing shape. The great thing about the new depsgraph is that you can drive this shape directly from the other two, effectively making a ‘smart’ mesh that behaves well regardless of how it is rigged. If you are curious about this, check out the workflow video below:

Finer Granularity Dependency Graph Tricks

The finer granularity of the dependency graph lets us work around potential dependency cycles that would trip up the old object-based system, and make usable rig setups. One such setup is at least sometimes called the ‘Dorito Method’, for reasons I have not been able to discern.
The goal of the setup is to deform the mesh using shapekeys, and then further enable small tweaks with deforming controls – an armature. The trick is to make these controls ‘ride’ with the mesh + shapekeys, effectively a cycle (mesh->bone->mesh) but not really, because the first ‘mesh’ in that sequence is only deformed by shapekeys.
The fix for the above cycle is to duplicate the meshes: (mesh1->bone->mesh2), where mesh1 has the shapekeys and mesh2 is deformed by the bone. The sneaky bit is that both mesh objects are linked meshes, so they share the shapekey block.
The problem with Blender before the dependency refactor is that everything works *except* driving the shapes and the deforms from the same armature. This was due to the object-only limitation of the dependency graph. Now that we have finer granularity (at least in the depsgraph_refactor branch), this problem is completely solved!

Since this is a tricky method, I’ve got some more documentation about it after the jump

  1. The above image is an exploded view; in the blend, all three objects (the rig and the two meshes) would be in the same location.
  2. The two meshes are linked-data objects. They share the same shapekeys, hence the same shapekey drivers.
  3. The bone on the right has a custom property that drives the shapekeys, deforming both meshes.
  4. The larger green bone and the square-shaped bone deform the topmost mesh via an armature deform.
  5. The lower green bone copies the location of a vertex in the original mesh (a Child Of constraint would be even more forgiving). This is not a cycle, since the lower mesh is not deformed by the armature.
  6. The visible red control is a child of that bone.
  7. The larger green bone (the deformer) has a local Copy Location to the visible red control.

This could be simplified somewhat by adding a Child Of constraint directly to the controller (targeting the original mesh), but I prefer not to constrain animator controls.
If you were to attempt this in 2.73 or the upcoming 2.74, it would fail to update reliably unless you split out the bone that drives the shapekey into its own armature object. This has to do with the coarse-grained dependency graph in 2.74, which only looks at entire objects. The downside of that workaround (and the upside of the new dependency graph) is that you would end up with two actions for animating your character instead of one (bleh), or you might have difficulties with proxies and linked groups.
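The cycle reasoning above can be checked mechanically. Here is a small, purely illustrative sketch (plain Python, no bpy; the node names are made up) showing why the naive setup is a genuine cycle while the duplicated-mesh setup is not:

```python
# Hypothetical sketch: a tiny DFS cycle detector over a directed
# dependency graph given as {node: [nodes it updates]}.

def has_cycle(edges):
    """Return True if the directed graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True            # back edge found -> cycle
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(edges))

# Naive setup: the armature deforms the same mesh it rides on.
naive = {"mesh": ["bone"], "bone": ["mesh"]}

# Dorito setup: mesh1 (shapekeys only) drives the bone; the bone
# deforms the linked duplicate mesh2. No edge returns to mesh1.
dorito = {"mesh1": ["bone"], "bone": ["mesh2"], "mesh2": []}

print(has_cycle(naive))   # True
print(has_cycle(dorito))  # False
```

The duplicated mesh simply redirects the final edge to a different node, which is exactly why the depsgraph can evaluate the Dorito setup in a single pass.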

Further thoughts

If we had some kind of hypothetical “Everything Nodes” system, we could implement this kind of setup without duplicating the mesh, and indeed without having redundant parent and child bones – the 3D setup would be quite simple, and the node setup would be less hackish and clearer about why this is not a dependency. I’ve made a hypothetical ‘everything nodes’ setup below, to illustrate what the dependencies actually are. In a real system, it’s quite likely you’d represent this with two node trees: one for the rig object, and one for the actual mesh deformation.

Categories: 3D Design

Animation System Roadmap – 2015 Edition

Blender - Tue, 03/03/2015 - 12:22

Hi there! It’s probably time to make this somewhat official:

Here is a selection of the most pressing “big ticket” animation-related developments currently on my todo list. Do note that this is not an exhaustive list (there are many other items), but it does contain all the main things that I’m most aware of.

(This is cross-posted from my original post: http://aligorith.blogspot.co.nz/2015/03/animation-system-roadmap.html)

High Priority NLA

* Local Strip Curves – Keyframing strip properties (e.g. time and influence) currently doesn’t update correctly.     [2.75]

Quite frankly, I’m surprised the current situation seems to work as well as it has, because the original intention here (and the only real way to solve it properly) is to have dedicated FCurves which get evaluated before the rest of the animation is handled.

I’ve got a branch with this functionality working already – all that’s missing is code to display those FCurves somewhere so that they can be edited (without being confused for FCurves in the active actions). That said, the core parts of this functionality are now solid and working in the way originally intended.

I originally wanted to get this polished and into master for 2.74 – definitely before Gooseberry starts trying to animate, as I know that previous open movie projects did end up using the NLA strip times for stuff (i.e. dragon wings when flying), and the inclusion of this change will be somewhat backwards incompatible. (The data structures are all still there – nothing changed on that front – but there were some bugs in the old version which mean that, even putting aside the fact that you can’t insert keyframes where they’re actually needed, the animations wouldn’t actually get evaluated correctly!)

On a related note – the bug report about renaming NLA strips not updating the RNA paths: that is a “won’t fix”, as that way of keyframing these properties (the one used in master) was never the correct solution. The new approach will simply blow it all away, so there’s no point piling another hack-fix on top of it all.

* Reference/Rest Track and Animation Layers Support  [2.76]

This one touches on two big issues. Firstly, there’s the bug where, if not all keyframed properties are affected by every strip (or at least set to some sane value by a “reference” strip), you will get incorrect poses when using renderfarms or jumping around the timeline in a non-linear way.

On another front, the keyframing on top of existing layers (i.e. “Animation Layers”) support doesn’t work well yet, because keyframing records the combined value of the stack + the delta-changes applied by the active action that you’re keying into. For this to work correctly, the contributions of the NLA stack must be able to be removed from the result, leaving only the delta changes, thus meaning that the new strip will be accumulated properly.

So, the current plan here is that an explicit “Reference Pose” track will get added to the bottom of NLA stacks. It will always be present, and should include every single property which gets animated in the NLA stack, along with what value(s) those properties should default to in the absence of any contributions from NLA strips.

Alongside this reference track, all the “NlaEvalChannels” will be permanently stored (during runtime only; they won’t get saved to the file) instead of being recreated from scratch each time. They will also get initialised from the Reference Track. Then, this allows the keyframing tools to quickly look up the NLA stack result when doing keyframing, thus avoiding the problems previously faced.
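The delta-keyframing arithmetic described above can be sketched in a few lines. This is a hypothetical illustration (not Blender code), assuming the simplest case of a purely additive top strip on a single float channel:

```python
# Hypothetical sketch of keyframing onto an additive NLA layer: to key a
# desired final value, the contribution of the rest of the stack (reference
# value + lower strips) must first be subtracted out, leaving only the delta.

def evaluate_stack(reference, additive_deltas):
    """Final value = reference pose value + sum of additive strip deltas."""
    return reference + sum(additive_deltas)

def key_on_additive_layer(desired, reference, lower_deltas):
    """Return the delta to record so the stack evaluates to `desired`."""
    below = evaluate_stack(reference, lower_deltas)
    return desired - below

reference = 0.0               # value from the hypothetical Reference Track
lower_deltas = [0.5, -0.25]   # contributions of strips below the active one

delta = key_on_additive_layer(1.0, reference, lower_deltas)
final = evaluate_stack(reference, lower_deltas + [delta])
print(delta, final)           # 0.75 1.0
```

The bug described earlier comes from recording `final` instead of `delta`; with the stored NlaEvalChannels, the stack result is available at keying time, so the subtraction becomes cheap.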

* A better way to retime a large number of strips [2.76/7]

It’s true that the current presentation of strips is not exactly the most compact of representations. To make it easier to retime a large number of strips (i.e. where you might want them staggered across a large number of objects), we may need to consider having something like a summary track in the dopesheet. Failing that, we could just have an alternative display mode which compacts these down for this use case.

Action Management [2.74, 2.75]

See the Action Management post. The priority of this ended up being bumped up, pushing the NLA fixes (i.e. Local Strip Keyframes from 2.74, and Reference Track Support from 2.75) back by 1-2 releases.

There are also a few related things which were not mentioned in that post (as they did not fit):

* Have some way of specifying which “level” the “Action Editor” mode works on.

Currently, it is strictly limited to the object-level animation of the active object. Nothing else. This may be a source of some of the confusion and myths out there…  (Surely the fact that the icon for this mode uses the Object “cube” is a bit of a hint that something’s up here!)

* Utilities for switching between Dopesheet and NLA.

As mentioned in the Action Management post, there are some things which can be done to make the relationship between these closer, to make stashing and layering workflows nicer.

Also in question is how to include the Graph Editor in there somehow too… (well, maybe not with the NLA, but at least with the Dopesheet)

*  “Separate Curves” operator to split off FCurves into another action

The main point of this is to split some unchanging bones off from an action, so that it contains only moving parts. It also paves the way for other stuff, like taking an animation made for grouped objects back to working on individual objects.

Animation Editors

* Right-click menus in the Channels List for useful operations on those [2.75]

This should be a relatively simple and easy thing to do (especially if you know what to do). So, it should be easy to slot this in at some point.

* Properties Region for the Action Editor   [2.76]

So, at some point recently, I realised that we probably need to give the Action Editor a dedicated properties region too, to deal with things like groups and also the NLA/AnimData/libraries stuff. Creating the actual region is not really that difficult. Again, it boils down to finding the time to slot this in, and then figuring out what to put in there.

* Grease Pencil integration into normal Dopesheet [2.76]

As mentioned in the Grease Pencil roadmap, I’ve got some work in progress to include Grease Pencil sketch-frames in the normal dopesheet mode too. The problem is that this touches almost every action editor operator, each of which needs to be checked to make sure it doesn’t take the lazy road out by only catering for keyframes in an either/or situation. Scheduling this to minimise conflicts with other changes is the main issue here, as well as the simple fact that, again, this is not “simple” work you can do when half-distracted by other stuff.

Bone Naming  [2.77]

The current way that bones get named when they are created (i.e. by appending and incrementing the “.xyz” numbers after their names) is quite crappy, and ends up creating a lot of work when duplicating chains like fingers or limbs. That is because you now have to go through removing these .xyz suffixes (or changing them back down to the .001 and .002 versions) before changing the actual things which should change (i.e. Finger1.001.L should become Finger2.001.L instead of Finger1.004.L or Finger1.001.L.001).

Since different riggers have different conventions, and this functionality needs to work with the “auto-side” tool as well as just doing the right thing in general, my current idea here is to give each Armature Datablock a “Naming Pattern” settings block. This would allow riggers to specify how the different parts of each name behave.

For example, [Base Name][Chain Number %d][Segment Letter][Separator '.'][Side LetterUpper] would correspond to “Finger2a.L”. With this in place, the “duplicate” tool would know that it should increment the chain number/letter (if it’s just a single chain; perhaps preparing for flipping the entire side if it’s more of a tree), while leaving the segment alone. The “extrude” tool would know to increment the segment number/letter while leaving the chain number alone (and not creating any extra gunk on the end that needs to be cleaned up). The exact specifics, though, would need to be worked out to make this work well.
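To make the idea concrete, here is a hypothetical sketch of such a naming pattern in plain Python. None of this is a real Blender API; the field layout and the Finger example names just follow the pattern described above:

```python
# Hypothetical sketch of the proposed "Naming Pattern": a name is built from
# fields, and each tool bumps only the field it owns, leaving the rest alone.

import string

def make_name(base, chain, segment, side, sep="."):
    """Build e.g. 'Finger2a.L' from [Base][Chain %d][Segment letter][Sep][Side]."""
    return f"{base}{chain}{string.ascii_lowercase[segment]}{sep}{side}"

def duplicate(base, chain, segment, side):
    """Duplicating a chain increments the chain number, keeps the segment."""
    return make_name(base, chain + 1, segment, side)

def extrude(base, chain, segment, side):
    """Extruding increments the segment letter, keeps the chain number."""
    return make_name(base, chain, segment + 1, side)

print(make_name("Finger", 2, 0, "L"))   # Finger2a.L
print(duplicate("Finger", 2, 0, "L"))   # Finger3a.L
print(extrude("Finger", 2, 0, "L"))     # Finger2b.L
```

Because each tool touches exactly one field, no “.001”-style gunk ever appears, and the auto-side tool would only ever need to rewrite the side field.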

Drivers

* Build a dedicated “Safe Python Subset” expression engine for running standard driver expressions to avoid the AutoRun issues

I believe that the majority of driver expressions can be run without full Python interpreter support, and that the subset of Python needed to support the kinds of basic math equations that the majority of such driver expressions use is a very well defined/small set of things.

This set is small enough that we can in fact implement our own little engine for it, with the benefit that it could probably avoid most of the Python overheads, while also being safe from the security risks of having a high-powered Turing-complete interpreter powering it. Another benefit is that this technique would not suffer from GIL issues (which will help in the new depsgraph; oddly, this hasn’t been a problem so far, but I’d be surprised if it doesn’t rear its ugly head at the worst possible moment of production at some point).

In the case where it cannot in fact handle the expression, it can then just turf it over to the full Python interpreter instead. In such cases, the security limiting would still apply, as “there be dragons”. But, for the kinds of nice + simple driver expressions we expect/want people to use, this engine should be more than ample to cope.

So, what defines a “nice and simple” driver expression?

- The only functions which can be used are builtin math functions (and not any arbitrary user-defined ones in a script in the file; i.e. only things like sin, cos, abs, … would be allowed)

- The only variables/identifiers/input data it can use are the Driver Variables that are defined for that driver. Basically, what I’ve been insisting that people use when using drivers.

- The only “operators” allowed are the usual arithmetic operations: +, -, *, /, **, %

What makes a “bad” (or unsafe) driver expression?

- Anything that tries to access anything using any level of indirection. This rules out all the naughty “bpy.data[...]” and “bpy.context.blah” accesses that people still try to use, despite now being blasted with warnings about it. This limitation is also in place for a good reason – these sorts of things are behind almost all the Python exploits I’ve seen discussed, and implementing such support would just complicate and bloat our little engine.

- Anything that tries to do list/dictionary indexing, or uses lists/dictionaries. There aren’t many good reasons to be doing this (EDIT: perhaps randomly choosing an item from a set might count. In that case, maybe we should restrict these to being “single-level” indexing instead?).

- Anything that calls out to a user-defined function elsewhere. There is inherent risk here, in that such code could do literally anything.

- Expressions which try to import any other modules, or load files, or crazy stuff like that. There is no excuse… These should just be red-flagged whatever the backend involved, and/or nuked on the spot when we detect them.
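The whitelist above is easy to prototype. Here is a minimal sketch (my own illustration, not Blender’s actual engine, and the function list is just an example) of vetting an expression with Python’s `ast` module before deciding whether it needs the full interpreter:

```python
# Hypothetical sketch: walk the AST of a driver expression and accept it only
# if every node is a literal, an arithmetic operator, a whitelisted math
# function, or a declared driver variable. Attribute access, subscripts,
# imports, and arbitrary calls all fail the check.

import ast

SAFE_FUNCS = {"sin", "cos", "tan", "abs", "sqrt", "floor", "ceil"}
SAFE_NODES = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
              ast.Name, ast.Load, ast.Call,
              ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
              ast.USub, ast.UAdd)

def is_safe_expression(expr, driver_vars):
    """Return True if `expr` uses only the 'nice and simple' subset."""
    try:
        tree = ast.parse(expr, mode="eval")
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if not isinstance(node, SAFE_NODES):
            return False                  # e.g. Attribute, Subscript, Lambda
        if isinstance(node, ast.Call):
            f = node.func
            if not (isinstance(f, ast.Name) and f.id in SAFE_FUNCS):
                return False              # only whitelisted math functions
        elif isinstance(node, ast.Name):
            if node.id not in SAFE_FUNCS and node.id not in driver_vars:
                return False              # only declared driver variables
    return True

print(is_safe_expression("sin(var) * 2 + offset", {"var", "offset"}))  # True
print(is_safe_expression("bpy.data.objects['Cube']", {"var"}))         # False
print(is_safe_expression("__import__('os').system('ls')", {"var"}))    # False
```

Anything that fails the check would be turfed over to the full (sandboxed/AutoRun-gated) Python path, exactly as described above.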

* A modal “eyedropper” tool to set up common “garden variety” 1-1 drivers

With the introduction of the eyedropper tools to find datablocks and other stuff, a precedent has been set in our UI, and it should now be safe to include something similar for adding a driver between two properties. There are of course some complications which arose from the operator/UI code mechanics last time I tried this, but putting this in place should make it easier to handle most cases.

* Support for non-numeric properties

Back when I initially set up the animation system, I couldn’t figure out how to coerce things like strings and pointers into a form that could work with animation curves. Even now, I’m not sure how this could be done. That said, while writing this, I had the thought that perhaps we could just use the same technique used for Grease Pencil frames?

Constraints

* Rotation and Scale Handling

Instead of trying to infer the rotation and scale from the 4×4 matrices (and failing), we would instead pass down “reference rotation” and “reference scale” values alongside the 4×4 matrix during the evaluation process. Anytime anything needs to extract a rotation or scale from the matrix, it has to adjust the result to match the reference transforms (i.e. for rotations, this does the whole “make compatible euler” dance to get them up to the right cycle, while for scale, this just means setting the signs of the scale factors). If, however, the rotation/scale gets changed by a constraint, that constraint must also update those reference values to match.

These measures should be enough to combat the limitations currently faced with constraints. Will it result in really ugly code? Hell yeah! Will it break stuff? Quite possibly. Will it make it harder to implement any constraints going forth? Absolutely. But will it work for users? I hope so!

Rigging

It’s probably time that we got a “Rigging Dashboard” or similar…

Perhaps the hardest thing in trying to track down issues in the rigs being put out by guys like JP and cessen these days is that they are so complex (with multiple layers of helper bones + constraints + parenting + drivers scattered all over) that it’s hard to figure out where exactly to start, or which set of rigging components interact to create a particular result.

Simply saying “nodify everything” doesn’t work either. Yes, it’s all in one place now, but then you’ve got the problem of a giant honking graph that isn’t particularly nice to navigate (large graph navigation in and of itself is another interesting topic for another time and date).

Key things that we can get from having such a dashboard are:

1) Identifying cycles easier, and being able to fix them

2) Identifying dead/broken drivers/constraints

3) Isolating particular control chains to inspect them, with everything needed presented in one place (i.e. on a well designed “workbench” for this stuff)

4) Performance analysis tools to figure out which parts of your rig are slow, so that you can look into fixing that.

Medium Priority NLA

* A better way of flattening the stack, with fewer keyframes created

In many cases, it is possible to flatten the NLA without baking out each frame. This only really applies when there are no overlaps, where the keyframes can simply be transposed “as is”. When strips do interact, there may be possibilities to combine them in a smarter way. In the worst case, we can just fall back to baking.

* Return of special handling for Quaternions?

I’m currently pondering whether we’ll need to reinstate special handling for quaternion properties, to keep things sane when blending.

* Unit tests for the whole time-mapping math

I’ve been meaning to do this, but I haven’t been able to get the gtests framework to work with my build system yet… If there ever were a model example of where these things come in handy, it is this!

Animation Editors

* Expose the Animation Channel Filtering API to Python

Every time I see the addons that someone has written for dealing with animation data, I’m admittedly a bit saddened that they do things like explicitly digging into the active object only, and probably only caring about certain properties in there. Let’s just say, “been there done that”… that was what was done in the old 2.42/3 code, before I cleaned it up around 2.43/2.44, as it was starting to become such a pain to maintain it all (i.e. each time a new toggle or datatype was added, ALL the tools needed to be recoded).

These days, all the animation editors do in fact use a nice C API for all things channels-related. Some of it pre-dates the RNA system, so it could be said that there are some overlaps. Then again, this one is specialised for writing animation tools and drawing animation editors, while RNA is generic data access – no comparison basically.

So, this will happen at some point, but it’s not really an urgent/blocking issue for anything AFAIK.

* To support the filtering API, we need a way of setting up or supplying more general filtering settings that can be used everywhere that the dopesheet filtering options aren’t already available

The main reason why all the animation editor operators refuse to work outside of those editors is that they require the dopesheet filtering options (i.e. those toggles on the header for each datablock, and other things) to control what they are able to see and affect. If we have some way of passing such data to operators which need it in other contexts (as a fallback), this opens the way up for stuff like being able to edit stuff in the timeline.

As you’ll hopefully be well aware, I’m extremely wary of any requests to add editing functionality to the timeline. On day one, it’ll just be “can we click to select keyframes, and then move them around”, and then before long, it’s “can we apply interpolation/extrapolation/handle types/etc. etc.” As a result, I do not consider it viable to specifically add any editing functionality there. If there is editing functionality for the timeline, it’ll have to be borrowed from elsewhere!

Action Editor/Graph Editor

* Add/Remove Time

Personally I don’t understand the appeal of this request (maybe it’s a Maya thing), but nonetheless, it’s been on my radar/list as something that can be done. The only question is this: is it expected that keyframes should be added to enact a hold when this happens, or is this simply about expanding and contracting the space between keyframes?

* Make breakdown keyframes move relative to the main keyframes

In general, this is simple, up until the keyframes start moving over each other. At that point, it’s not clear how to get ourselves out of that pickle…

Small FCurve/Driver/etc. Tweaks

* Copy Driver Variables

* Operators to remove all FModifiers

Motion Capture Data

* A better tool for simplifying dense motion curves

I’ve been helping a fellow kiwi work on getting his curve-simplifying algorithm into Blender. So far, its main weakness is that it is quite slow (it runs in exponential time, which sucks on longer timelines), but it has guarantees of “optimal” behaviour. We also need to find some way to estimate the optimal parameters, so that users don’t have to spend a lot of time testing different combinations (which is not going to be very nice, given the non-interactive nature of this).

Feel free to try compiling this and give it a good test on a larger number of files and let us know how you go!

* Editing tools for FSamples

FSamples were designed explicitly for the problem of tackling motion capture data, and should be more suited to this than the heavier keyframes.

Keying Sets

* Better reporting of errors

The somewhat vague “Invalid context” error for Keying Sets comes about because there isn’t a nice way to pipe more diagnostic information in and out of the Keying Set callbacks which could provide us with that information. It’s a relatively small change.

Pose Libraries

* Internal code cleanups to split out the Pose Library API from the Pose Library operators

These used to serve both purposes, but the 2.5 conversion meant that they were quickly converted over to operator-only to save time. But this is becoming a bottleneck for other stuff.

* Provide Outliner support for Pose Library ops

There’s a patch in the tracker, but it went about this in the wrong way (i.e. by duplicating the code into the outliner). If we get that issue out of the way, this is relatively trivial.

* Pose Blending

Perhaps the biggest upgrade that can be made is to retrofit a different way of applying the poses: one which can blend between the values in the action and the current values on the rig. Such functionality does somewhat exist already (for the Pose Sliding tools), but we would need to adapt/duplicate this to get the desired functionality. More investigation is needed, but it will happen eventually.

* Store thumbnails for Poses + use the popup gallery (i.e. as used for brushes) for selecting poses

I didn’t originally do this because, at the time, I thought that these sorts of grids weren’t terribly effective (I’ve since come around on this, after reading more about this stuff) and that it would be much nicer if we could actually preview how the pose would apply in 3D, to better evaluate how well it fits the current pose (versus only having a 2D image to work off). The original intent was also to have a fancy 3D gallery, where scrolling through the gallery would swing/slide the alternatively posed meshes in from the sides.

Knowing what I know now, I think it’s time we used such a grid as one of the ways to interact with this tool. Probably the best way would be to make it possible to attach arbitrary image datablocks to Pose Markers (allowing, for example, the ability to write custom annotations – i.e. what phonemes a mouth shape refers to), and to provide some operators for creating these thumbnails from the viewport (i.e. by drawing a region to use).

Fun/Useful but Technically Difficult

There are also a bunch of requests I’d like to indulge, and indeed I’ve wanted to work on them for years. However, these also come with no small amount of baggage, which means that they’re unlikely to show up soon.

Onionskinning of Meshes

Truth be told, I wanted to do this back in 2010, around the time I first got my hands on a copy of Richard Williams’ book. The problem, though, was and remains that of maintaining adequate viewport/update performance.

The most expensive part of the problem is that we need to have the depsgraph (working on local copies of data, and in a separate thread) stuff in place before we can consider implementing this. Even then, we’ll also need to include some point caching stuff (e.g. Alembic) to get sufficient performance to consider this seriously.

Editable Motion Paths

This one actually falls into the “even harder” basket, as it actually involves 3 different “hard” problems:

1) Improved depsgraph so that we can have selective updates of only the stuff that changes, and also notify all the relationships appropriately

2) Solving the IK problem (i.e. changed spline points -> changed joint positions -> local-space transform properties with everything applied, so that it works when propagated through the constraints ok). I tried solving this particular problem 3 years ago, and ran into many different little quirky corner cases where it would randomly bug/spazz out, flipping and popping, or simply not going where it needed to go, because the constraints exhibit non-linear behaviour and interpret the results differently. This particular problem is one which affects all the other fun techniques I’d like to use for posing stuff, so we may have to solve it once and for all with an official API for doing this. (And judging from the problems faced by the authors of various addons – including the current editable motion paths addon – and also the even greater difficulties faced by the author of the Animat on-mesh tools, it is very much a tricky beast to tame.)

3) Solving the UI issues with providing widgets for doing this.

Next-Generation Posing Tools

Finally we get to this one. Truth be told, this is the project I’ve actually been itching to work on for the past 3 years, but have had to put off for various reasons (i.e. to work on critical infrastructure fixes and also for uni work). It is also somewhat dependent on being able to solve the IK problem here (which is a recurring source of grief if we don’t do it right).

If you dig around hard enough, you can probably guess what some of these are (from demos I’ve posted and also things I’ve written in various places). The short description, though, is that if this finally works in the way I intend, we’ll finally have an interface that lets us capture the effortless flow, elegance, and power of traditional animating greats like Glen Keane or Eric Goldberg – having a computer interface that allows that kind of fluid interaction is one of my greatest research interests.

Closing Words

Looking through this list, it seems we've got enough here for at least another 2-3 years of fun times!

Categories: 3D Design

Blender Dependency Graph Branch for users

Blender - Fri, 02/20/2015 - 05:59

Hello! I'm visiting here to talk about the work being done by Sergey, Joshua, Lukas and others to update Blender's dependency graph. Anyone can test it by building the depsgraph_refactor branch from git.

How?

To make things interesting I'm testing on Elephants Dream files. To do this, I also have to update the project to work in post-2.5 Blender! This has the effect of exposing bugs/todos in the branch by exposing it to a large set of working files that have to match their previously known behavior. As a side effect, Blender Cloud subscribers and others should gain access to an updated Elephants Dream, and we'll have a couple of new addons to update old files and to create walk cycles on paths. Not to be stuck on old things, I'm also creating some useful rigs that are impossible without the refactor.

But, what is it?

Well, what is this 'depsgraph' anyway, and why does it need updating? Simply put, without a depsgraph you would not be able to have things like constraints, drivers, modifiers or even simple object parenting working in a reliable way. As we make our complicated networks of relationships, Blender internally builds an "A depends on B depends on C" type of network that looks very much like a compositing node network. With this network, for each frame, Blender knows to update A before B before C. This is how, for instance, child objects can inherit their parents' transforms before updating themselves.
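The evaluation order described above is just a topological sort over the relationship network. As a rough standalone sketch of the idea (plain Python; the names and data structures here are illustrative, not Blender's actual API):

```python
from collections import defaultdict, deque

def evaluation_order(deps):
    """deps maps each node to the nodes it depends on,
    e.g. {"B": ["A"], "C": ["B"]} means A -> B -> C."""
    dependents = defaultdict(list)  # reverse edges: who depends on me
    pending = {node: len(parents) for node, parents in deps.items()}
    for node, parents in deps.items():
        for p in parents:
            pending.setdefault(p, 0)   # parents with no deps of their own
            dependents[p].append(node)
    # Start with nodes that depend on nothing (e.g. un-parented objects).
    queue = deque(n for n, c in pending.items() if c == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in dependents[node]:
            pending[child] -= 1
            if pending[child] == 0:
                queue.append(child)
    if len(order) != len(pending):
        raise ValueError("cyclic dependency")  # the laggy-evaluation case
    return order

print(evaluation_order({"B": ["A"], "C": ["B"]}))  # -> ['A', 'B', 'C']
```

A cycle (Empty -> Armature -> Empty again, as in the Elephants Dream files below) is exactly the case where no valid order exists.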

Why is it being updated?

The current dependency graph was written during Elephants Dream (haha! the circle is complete), well before the modern 'everything can be animated' animation system we have now. That design really worked for the rigid old system, in which only specific properties could be animated. From 2.5 until now, only dependencies that worked in 2.4x could reliably be expected to work, even though the interface allows you to create them. Think of driving a bone's transform with another bone's transform in the same rig, or parenting an empty to the body of a character and then IK'ing the arm to that empty, or trying to get a flower to open up based on the brightness of the sun lamp… Even worse, the interface fully allows you to set up these drivers, but after you do, you get strange lags and stutters, with very limited feedback as to why. Previous patches enabled some very specific new setups without really changing the system under the hood. With the update, we can expect these setups and more to work in a predictable and speedy way. This also lays the groundwork for future changes in Blender, such as a new node system for modifiers + constraints + transforms + particles, basically enabling more proceduralism and flexible rigging. For now, in addition to "Animate all the things", we will be able to "Drive all the things" – very cool.

Introducing Dr. Dream


It turns out old Elephants Dream files *almost* work in 2.5 – 2.7, with the following exceptions:

  1. Action Constraints in Proog and Emo had ‘wrong angles’ due to a bug in the old constraint. Since it got fixed, these numbers have to be updated.
  2. Shapekey drivers have different data-paths and reference shapekeys by number instead of by name, breaking driven shapes.
  3. We used an old NLA feature that allows putting groups in the NLA and having strips refer to the rig inside the groups. This feature was removed during the animation system recode, and all that animation just stopped working – this is mainly true for all the robotic ducks in the background of shots.
  4. Another (terrible!) feature was the whole stride bone offsetting for walkcycles, that allowed for characters walking on paths. It was cumbersome to set up and resulted in much sliding of feet, and thus was never recoded in the new animation system. Which means all our walking-on-paths characters don’t walk anymore.
  5. Some cyclical dependencies (Empty -> Armature -> Empty again) cause bad/laggy evaluation. We simply got away with this in the few shots that it happens, but it is not guaranteed to ever render correctly again (even on 2.4!!!)
  6. Proog, Emo and animated characters are local in each shot, meaning fixes have to happen in every file.

To solve problems 1–3 I wrote an addon called Dr. Dream – an inside joke, as we used to call many Elephants Dream scripts 'Dr.' something, and because this Dr. is actually helping the patient work in new Blenders. Dr. Dream also handles problem number 6 – being a script, it can be run in every file, fixing the local characters.

To solve problem 5 I will do the following: Nothing. The depsgraph refactor will take care of this for me!!!!

Problem 4 requires coding a Python solution; this is a big project and will be the subject of a future post.

New Setup: soft IK


I'll do a series of posts on useful rigging tricks possible in depsgraph_refactor. This first one can be added to existing, animated rigs – even Elephants Dream ones – and was not possible before the refactor, because it relies on driving the transform of one bone with another in the same armature object. Some of the animators among you may have noticed a problem when animating IK legs: as the legs go from bent to straight (and sometimes bent again, like during a walk), the knees appear to 'pop' in a distracting way. The reason turns out to be simple math: as the chain straightens, the velocity of the knee increases (in theory to infinity), causing the knee to pop at those frames. There are a couple of excellent blog posts about the math and theory behind this here and here, and an old blog post about it in Blender here.
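For the curious, the usual fix those posts describe is to 'soften' the distance to the IK target so the chain approaches full extension asymptotically instead of hitting it. A minimal standalone sketch of one common exponential formulation (illustrative only; the actual script in the rig may use a different variant):

```python
import math

def soften_target_distance(d, chain_length, soft=0.01):
    """Smoothly clamp the distance to the IK target so the chain never
    quite reaches full extension (which is what makes the knee 'pop').
    'soft' is the width of the blending region near full extension."""
    soft_start = chain_length - soft
    if soft <= 0.0 or d <= soft_start:
        return d  # well within reach: leave the target alone
    # Beyond soft_start, approach chain_length asymptotically instead of
    # hitting it, which keeps the knee's velocity finite.
    return soft_start + soft * (1.0 - math.exp(-(d - soft_start) / soft))
```

With chain_length = 2.0 and an exaggerated soft = 0.5, a target at distance 1.9 is pulled back to roughly 1.78, and even a target at distance 10 stays strictly below 2.0 – the knee glides instead of popping.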
If you want to check out the blend file in that video, you can download the blend here. Note that I've exaggerated the soft distance; it really works fine at 0.01 or less. You can edit the number in line 6 of lengthgetter.py and then just rerun the script to see the effect. Too high a value (what I have) can make the character seem very bent-legged.

Categories: 3D Design

FSF adds Guix System Distribution to list of endorsed distributions

FSF - Tue, 02/03/2015 - 17:15

The FSF's list consists of ready-to-use full GNU/Linux systems whose developers have made a commitment to follow the Guidelines for Free System Distributions. This means each distro includes and steers users toward exclusively free software. All distros on this list reject nonfree software, including firmware "blobs" and nonfree documentation. The Guix System Distribution is a new and growing distro that currently ships with just over 1000 packages, already including almost all of the programs available from the GNU Project.

As the name suggests, at the heart of the Guix System Distribution is the GNU Guix (pronounced like "geeks") package management system. GNU Guix offers users uncommon features such as transactional upgrades and rollbacks, as well as declarative operating system configuration.

"The Guix System Distribution is a flexible, cutting edge, and bare bones distro ideally suited for experienced users. However, both the distro and the GNU Guix package management system itself have an active and welcoming community of contributors. I look forward to watching this project mature and encourage people to get involved," said Joshua Gay, FSF's licensing and compliance manager.

"The goal of GNU Guix is to bring the GNU system, as was envisioned 31 years ago, and to transcribe its ethical goals in the implementation. For example, functional package management means that Guix provides the complete 'Corresponding Source' of its packages, in the sense of the GNU GPL -- users know precisely how a binary package was obtained. Unprivileged users can install packages, and the whole system is customizable and hackable, à la Emacs. We hope to help federate GNU hackers and computing freedom supporters around the project. It's ambitious, but because it can help strengthen GNU and defend user freedom, I think it's worth it," said Ludovic Courtès, lead maintainer of GNU Guix.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About the GNU Operating System and Linux

Richard Stallman announced in September 1983 the plan to develop a free software Unix-like operating system called GNU. GNU is the only operating system developed specifically for the sake of users' freedom. See https://www.gnu.org/gnu/the-gnu-project.html.

By 1992, the essential components of GNU were complete, except for one: the kernel. When the kernel Linux was re-released under the GNU GPL in 1992, making it free software, the combination of GNU and Linux formed a complete free operating system, which made it possible for the first time to run a PC without non-free software. This combination is the GNU/Linux system. For more explanation, see https://www.gnu.org/gnu/gnu-linux-faq.html.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Categories: Free Software

Libreboot X200 laptop now FSF-certified to respect your freedom

FSF - Thu, 01/29/2015 - 22:25

This is the second Libreboot laptop from Gluglug (a project of Minifree, Ltd.) to achieve RYF certification, the first being the Libreboot X60 in December 2013. The Libreboot X200 offers many improvements over the Libreboot X60, including a faster CPU, faster graphics, 64-bit GNU/Linux support (on all models), support for more RAM, higher screen resolution, and more. The Libreboot X200 can be purchased from Gluglug at http://shop.gluglug.org.uk/product/libreboot-x200/.

The Libreboot X200 is a refurbished and updated laptop based on the Lenovo ThinkPad X200. In order to produce a laptop that achieved the Free Software Foundation's certification guidelines, the developers at Gluglug had to replace the low-level firmware as well as the operating system. Microsoft Windows was replaced with the FSF-endorsed Trisquel GNU/Linux operating system, which includes the GNOME 3 desktop environment. The free software boot system of Libreboot and the GNU GRUB 2 bootloader were adapted to replace the stock proprietary firmware, which included a BIOS, Intel's Management Engine system, and Intel's Active Management Technology (AMT) firmware.

The FSF has previously written about Intel's ME and AMT, calling attention to how this proprietary software introduces a fundamental security flaw -- a back door -- into a person's machine that allows a perpetrator to remotely access the computer over a network. It enables powering the computer on and off, configuring and upgrading the BIOS, wiping the hard drives, reinstalling the operating system, and more. While there is a BIOS option to ostensibly disable AMT, because the BIOS itself is proprietary, the user has no means to verify whether this is sufficient. The functionality provided by the ME/AMT could be a very useful security and recovery measure, but only if the user has control over the software and the ability to install modified versions of it.

"The ME and its extension, AMT, are serious security issues on modern Intel hardware and one of the main obstacles preventing most Intel based systems from being liberated by users. On most systems, it is extremely difficult to remove, and nearly impossible to replace. Libreboot X200 is the first system where it has actually been removed, permanently," said Gluglug Founder and CEO, Francis Rowe.

"This is a huge accomplishment, but unfortunately, it is not known if the work they have done to remove the ME and AMT from this device will be applicable to newer Intel-based laptops. It is incredibly frustrating to think that free software developers may have to invest even more time and energy into figuring out how to simply remove proprietary firmware without rendering the hardware nonfunctional. On top of that, the firmware in question poses a serious security threat to its users -- and the organizations who employ them. We call on Intel to work with us to enable removal of ME and AMT for users who don't want it on their machines," said FSF's executive director, John Sullivan.

In order to remove the ME, AMT, and other proprietary firmware from the laptop, the Libreboot developers had to first reverse engineer Intel's firmware. They then created a small software utility to produce a free firmware image that conforms to Intel's specifications. Finally, to install their firmware on the device, they used special hardware (an SPI flasher) that they directly connected to a small chip on the motherboard itself. After many months of work, the Libreboot developers managed to completely overwrite the proprietary firmware with Libreboot and GNU GRUB 2. Those who purchase a Libreboot X200 from Gluglug will receive a laptop that has had all of this work already done to it and will be able to update or install new firmware to their device without needing to make use of any special hardware or complicated procedures.

To learn more about the Respects Your Freedom hardware certification, including details on the certification of the Libreboot X200, visit http://www.fsf.org/ryf. Hardware sellers interested in applying for certification can consult http://www.fsf.org/resources/hw/endorsement/criteria.

Subscribers to the FSF's Free Software Supporter newsletter will receive announcements about future Respects Your Freedom products.


About Gluglug and Minifree, Ltd

Francis Rowe is the Founder and CEO of Minifree Ltd in the UK, which owns and operates Gluglug, a project to promote adoption of free software globally. To purchase products sold by Gluglug, visit http://shop.gluglug.org.uk.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942
licensing@fsf.org

Francis Rowe
Founder & CEO
Gluglug
info@gluglug.org.uk

Categories: Free Software

Assets – FileBrowser Preliminary Work – Experimental Build I

Blender - Sun, 01/18/2015 - 20:49

So, as some of you may know already, since December 2014 and my three weeks spent in Amsterdam at BI, I’ve started working on the asset topic.

An 'Append' file browser with two blender files showing all their materials, objects and textures. Note the renamed bookmark on the left too.

So far, I have not done anything really directly related to assets (except early designing) – rather, I've been improving the editor I intend to use later for asset handling, i.e. the FileBrowser. I have already ported some cleanups/refactors and some minor features (like the search field in the header, an operator to enforce mat/tex/etc. preview generation, …) to master, but most work is still in the dedicated 'assets-experiments' git branch, so I've made some experimental builds to (try to) get a bit of testing. In those builds you'll find:

  • Bookmarks and co. using UILists, with the possibility to rename and reorganize bookmarks, plus the default features of UILists (filtering by name, and some sorting).
  • The possibility to list the whole content of a blend file (in append/link mode) at once (set the 'Recursion Level' setting to 1), and, in any mode, to list several levels of the directory tree in a "flat" mode (including blend file content if relevant; set 'Recursion Level' to 2 or more).
  • Consequently, the possibility to append/link several items at once, either from the same .blend lib or even from different ones.
  • Filtering by datablock type was also added (so that you can see e.g. only materials and textures from all .blend libs in the same directory…).
  • Previews were added to object and group datablocks. Their generation is handled by a Python script (note: this build only handles the BI renderer; Cycles is yet to be added).

Note that for previews of datablocks like materials, textures, etc., you have to generate them manually (from the File -> Data Previews main menu) and then save the .blend file. On the other hand, preview generation for objects and groups works via separate automated tasks run on selected .blend files (which should not be opened at that time). This is quite inconsistent and shall be fixed for sure! On a more technical side (though it can also have effects from the user PoV):

  • Directory listing is now also a background job (like thumbnail generation for images and .blend files), which means listing huge directories, or remote ones, no longer locks the UI.
  • Previews of datablocks (IDs) are now exposed in RNA, so third-party scripts will also be able to generate their own previews if needed. Not all ID types have previews yet (currently only object, group, material, lamp, world, texture and image), but this is likely to change.

So, as usual, any feedback is more than welcome! Note too that Windows behavior was not tested at all yet (don't like starting my win WM :/ ); I do not expect (too many) issues on this platform, but you never know with Windows. Cheers, and hope the new year will be full of good things for Blender and all of you!

Duplicated from https://mont29.wordpress.com/2015/01/14/assets-filebrowser-preliminary-work-experimental-build-i/

Categories: 3D Design

Committee begins review of High Priority Projects list -- your input is needed

FSF - Mon, 12/08/2014 - 23:40

This announcement was written by the FSF's volunteer High Priority Projects Committee.

Nine and a half years ago the first version of the High Priority Free Software Projects (HPP) list debuted with only four projects, three of them related to Java. Eighteen months later, Sun began to free Java users. The current HPP list includes fourteen categories mentioning over forty distinct projects. Computing is ever more ubiquitous and diverse, multiplying challenges to surmount in order for all computer users to be free.

Undoubtedly there are thousands of free software projects that are high priority, each having potential to displace non-free programs for many users, substantially increasing the freedom of those users. But the potential value of a list of High Priority Free Software Projects maintained by the Free Software Foundation is its ability to bring attention to a relatively small number of projects of great strategic importance to the goal of freedom for all computer users. Over the years the list has received praise and criticism -- frankly not nearly enough, given the importance of its aims -- and been rebooted. As the list approaches its tenth year, we aim to revitalize and rethink it, on an ongoing basis.

The first step has been to assemble a committee which will maintain the list, initially composed of the following free software activists: ginger coons, Máirín Duffy, Matthew Garrett, Benjamin Mako Hill, Mike Linksvayer, Lydia Pintscher, Karen Sandler, Seth Schoen, and Stefano Zacchiroli. The committee has drafted this announcement and the following plan.

We need your input! Send your suggestions of projects to hpp-feedback@gnu.org. Remember, we're looking for projects of great strategic importance to the goal of freedom for all computer users. If you wish, we encourage you to publish your thoughts independently (e.g., on your blog) and send us a link. Keep in mind that not every project of great strategic importance to the goal of freedom for all computer users will be a software development project. If you believe other forms of activism, internal or external (e.g., making free software communities safe for diverse participants, mandating use of free software in the public sector), are most crucial, please make the case and suggest such a project!

Based on the received input, the current content of the list, and our own contributions, we will publish a substantially revised list and an analysis before LibrePlanet 2015 and expect a lively discussion at that event. If we are successful, we will have the immediate impact of bringing widespread coverage of free software movement strategy and the ongoing impact of garnering substantial attention and new effort for listed projects. (Note that we're also interested in outreach and measurement suggestions. A revised and maintained list is necessary but not sufficient for success.)

Finally, we've already made a few minor changes to the HPP list in order to fix long-standing issues that have been reported in the past. We are looking forward to your feedback at hpp-feedback@gnu.org as we work on more substantial improvements!


Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
campaigns@fsf.org

Categories: Free Software

Future viewport, the design

Blender - Tue, 12/02/2014 - 17:34

As outlined in the previous post there are some technical and feature targets we want to achieve. Recapping here:

1) A performance boost for drawing code. Make sure we always use the best drawing method to pass data to the GPU, and support features only available in newer OpenGL versions that will enable better performance and code.

2) Node-based material definition for the viewport – and definition of a new real-time material system used for rendering (a GLSL renderer).

3) Compositing. Includes things such as outlines, depth of field, ambient occlusion, HDR, bloom, flares.

4) Support mobile devices.

What is the state so far:

* Limited compositing (in the viewport_experiments branch). When we say limited, we mean that the compositor is not tied into the interface properly; rather, it just applies effects to the whole contents of the framebuffer. What we would ideally want is to not allow UI indicators, such as wires or bones, to affect compositing. This is not too hard to enforce, though, and can be done similarly to how the current transparency/X-ray system works: by tagging wire objects and rendering them on top of the compositing result.

* Some parts of our mesh drawing code use Vertex Buffer Objects in an optimal way, others use them but still suffer performance issues from not doing it right, while others do not use them at all.
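For context, the point of Vertex Buffer Objects is to pack vertex data into one contiguous, interleaved buffer that is uploaded to the GPU in a single call, instead of being sent attribute by attribute every frame. A small illustrative sketch of such packing (plain Python with struct; the format and function names are made up for illustration, this is not Blender's actual code):

```python
import struct

# Interleaved layout: position (3 floats) + normal (3 floats) per vertex.
# This is the kind of tightly packed buffer a VBO upload expects.
VERTEX_FORMAT = "6f"  # 6 little 32-bit floats = 24 bytes per vertex

def pack_vertices(verts):
    """verts: list of (position, normal) pairs, each a 3-tuple of floats.
    Returns one contiguous byte buffer ready for a single GPU upload."""
    buf = bytearray()
    for pos, nrm in verts:
        buf += struct.pack(VERTEX_FORMAT, *pos, *nrm)
    return bytes(buf)
```

Building this buffer once and reusing it is what separates the "optimal" code paths from the ones that resend data needlessly.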

How will the soc_2014_viewport_fx branch help achieve the targets?

Soc-2014_viewport_fx provides a layer that can be used to migrate to newer or mobile versions of OpenGL with less hassle, but it also tries to enforce some good rendering practices along the way, such as the requirement in modern versions of OpenGL that everything be rendered through Vertex Buffer Objects. It also removes GLU from the dependencies (since it uses deprecated OpenGL functionality).

Also it sets in place some initial functionality so things can be drawn using shaders exclusively. This is essential if we move to modern or mobile OpenGL versions at some point.

So it mostly helps with targets 1 and 4, but more work will need to be done after merging to realize those targets fully.

At some point, if we want to support modern or mobile OpenGL, we can't avoid rewriting a big part of our realtime rendering code. The branch already takes care of some of that, so it should be merged and worked on (merging is the first step really), unless we do not really care about supporting those platforms and features.

My estimation, from personal experiments with manual merging, is that it would take about 2-3 weeks of full time work to bring the branch to master-readiness.

Can we focus on some targets immediately?

Yes we can. Some targets, such as node materials or compositing, just assume GLSL support in mesh drawing, which is yet to be fully realized in the branch, so the branch isn't really blocking their progress. However, getting the branch in as soon as possible will mean fewer headaches during the merge.

Viewport usability design

Draw modes

Draw modes are getting a little unpredictable as to what they enable, and they are tied to a real-time material definition limited to specular/diffuse/textured. They are also bound to the texture-face data structure, which is becoming less relevant since we are slowly moving to a material-based approach. Often artists have to tweak a number of material and object options to get the visual feedback they need, which can be frustrating, and it is not apparent to new users either. We need a design that allows artists to easily work in a particular workflow and visualize what they want without extensive guesswork about how best to visualize it. Ideally we want to drop draw modes in favour of…

Workflow modes (model, sculpt, paint, animation, game shader design)

Different workflows require different data and different visualizations. So we can define 'workflow modes', each of which includes a set of shaders and visualization options authored specifically for the current workflow. For instance, a 'workbench' mode in edit mode will have a basic diffuse and specular shader with wireframe display options. For retopology, it would make sense to use a more minimal, transparent mesh display, like hidden wire, with depth offsetting to avoid intersection artifacts.

Example image of edit mode display options. Some options exist to aid in specific workflows, but this is not so readily apparent

For material definition or texture painting, users might want the full final result or an unshaded version of it for detail tweaking.

Debugging (logic, rigging, etc)

Drawing can offer visual feedback to make it easier for users to examine problematic areas in their scenes. Examples include order of dependency calculation or color-encoded vertex and face counts, or even debug options available to developers.


Easy to switch from one to another, easy to config or script

Using the workflow system, users should be able to get their display to be more predictable. Each workflow mode can expose settings for the shaders or passes used but we can allow more customization than this. A node interface will allow users to request data from blender and write their own shaders to process and visualize these data in their own way. We will follow the OSL paradigm with a dedicated node that will request data from blender in the form of data attribute inputs connected to the node. The data request system is at the heart of the new data streaming design and this means that materials and custom shaders should be able to request such data. Probably even access to real time compositing will be included, though memory consumption is a concern here, and we need to better define how data will be requested in that case.


Modernize! Assume that users will always want the best, most realistic, etc.

With the capabilities modern real-time shading offers, we aim to add a third render engine using OpenGL (next to Internal and Cycles) which can leverage the capabilities of modern GPUs and is tailored to make real-time rendering a real alternative for final rendering in Blender. A lot of the components are already there, but we can push it further, with shader implementations optimized especially for real-time rendering instead of trying to mimic an off-line renderer.

We want to make sure that our material display is pleasing, so we are exploring more modern rendering methods such as physically based shading (a patch by Clement Foucault using notes from Unreal Engine 4 is already being considered for inclusion) and deferred rendering.

Needless to say this will also mean improved preview of materials for blender internal and cycles.

Categories: 3D Design