An update to VOSC Visual Particle Synthesizer is now live online. It includes a new RANDOMIZE function, which loads random values for all parameters of all oscillators. Since it's random, a lot of the time the result won't look like much, but every once in a while something interesting pops up. It's a great way to explore the range of the program's capabilities, and to find a place to start tweaking from.
The update also includes a way to cap the particle resolution so that loading patches that have high settings won’t slow your machine down to a crawl. Find it in the RES panel.
The update is live online and in the Android version; it's pending approval on iOS.
VOSC is now live online. Use it right in your browser here, where you can also find instructions for getting started:
Here are a couple of screens from an app currently under development, to be released on iOS and Android, with other platforms and versions to follow.
It could be described as a visual particle synthesizer. It consists mainly of four oscillators that control the positions of a vast array of on-screen particles. Adjusting the frequencies and shapes of the oscillators can produce some very intricate moving patterns.
It’s an evolution and systemization of the phaSing series of flash toys that I made quite a while back. Here, the rendering and most of the “synthesis” calculations are performed using GPU shaders written in AGAL for the Stage3D feature of Flash Player / AIR.
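To make the oscillator-to-particle idea concrete, here is a minimal CPU-side sketch. The real synthesis runs in AGAL shaders on the GPU, and the struct names, the sine-only wave shape, the two-oscillators-per-axis split, and the per-particle phase offset are all assumptions for illustration, not VOSC's actual mapping:

```cpp
#include <cmath>
#include <vector>

namespace { const double PI = 3.14159265358979323846; }

// Hypothetical oscillator (field names are assumptions): a sine with
// frequency, amplitude, and phase. VOSC also has adjustable wave
// shapes; only sine is sketched here.
struct Osc {
    double freq, amp, phase;
    double eval(double t) const {
        return amp * std::sin(2.0 * PI * freq * t + phase);
    }
};

struct Particle { double x, y; };

// Position every particle from the four oscillators, offsetting each
// particle's input by its index so the array fans out into a pattern
// that moves as t advances.
std::vector<Particle> layout(const Osc osc[4], int count, double t) {
    std::vector<Particle> pts(count);
    for (int i = 0; i < count; ++i) {
        double s = t + double(i) / double(count);  // per-particle offset
        pts[i].x = osc[0].eval(s) + osc[1].eval(s);
        pts[i].y = osc[2].eval(s) + osc[3].eval(s);
    }
    return pts;
}
```

With thousands of particles, evaluating this per particle per frame is exactly the kind of embarrassingly parallel work that moves naturally into a vertex shader.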
I've been developing a game for mobile devices, primarily tablets, and my first choice was AIR for mobile with one of the nice new Stage3D game libraries. I chose ND2D over Starling and Genome2D: unlike Starling, its API is flexible and pleasantly similar to the Flixel and FlashPunk engines, and unlike Genome2D, it's open source. Using the ROLF fork I was able to get performance comparable to Genome2D.
The rendering performance of all these engines is pretty impressive, about on par with native rendering on mobile devices, with some language overhead only once you get into the thousands of objects. The problem is that the game also makes use of a physics engine, and this is where the limitations of the platform really start to bite. Using the AS3 version of the Nape engine (a really nice engine, way faster than Box2DAS3 and with a much better API), I was only able to simulate around 50 dynamic bodies on an iPad2 (hardly a low-end device) with decent frame rates (above 40 FPS).
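Part of why body counts hit a wall so much sooner than sprite counts: in the worst case, collision detection has to consider every pair of bodies, so the work per step grows quadratically. Nape and other engines prune most pairs with a broadphase, so this sketch is just the shape of the problem, not how any of them actually work, but it shows why per-check language overhead compounds so quickly:

```cpp
// Worst-case pair checks per physics step for n bodies: n(n-1)/2.
// Sprites, by contrast, cost roughly constant work each, which is why
// thousands of sprites render fine while a few dozen bodies struggle.
long pairChecks(long bodies) {
    return bodies * (bodies - 1) / 2;
}
```

For 50 bodies that's 1225 potential checks per step; for 450 it's 101025, so any constant-factor VM overhead on each check gets multiplied thousands of times per frame.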
So I decided to run some tests using HaxeNME. Since it compiles to C++ and runs without a VM, I expected to see a big performance gain. Using DrawTiles for batched rendering and the Haxe version of Nape for physics, I got a pretty good improvement: about 90 dynamic bodies before dropping below 40 FPS on the iPad2. But since I'd really like to target lower-end devices, and I'm building creatures that each have over 20 dynamic bodies, this was still pretty limiting.
So I took a look at going native, deciding on Cocos2Dx (an open-source C++ port of Cocos2D) for the rendering, with the Chipmunk physics engine. Here I was able to get over 450 colliding objects before the frame rate even started to drop below 60: more than a tenfold increase in performance over the best possible solution in AS3, and more than fivefold over HaxeNME. This was way more than I expected, especially given that rendering speed across the various platforms, all utilizing OpenGL ES, is virtually the same.
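All of these measurements follow the same basic recipe: keep adding bodies until a step no longer fits in the frame budget, then report the last count that did. Here's an engine-agnostic sketch of that loop; budgetMs, maxBodies, and the all-pairs stepCost stub are my own stand-ins for illustration, not Nape's or Chipmunk's actual APIs:

```cpp
#include <chrono>
#include <cmath>
#include <cstddef>
#include <vector>

// Frame budget in milliseconds for a target rate, e.g. 40 FPS -> 25 ms.
double budgetMs(double targetFps) { return 1000.0 / targetFps; }

// Stand-in workload: a naive all-pairs pass playing the role of one
// physics step. A real test would call the engine's own step instead.
double stepCost(const std::vector<double>& xs) {
    double acc = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i)
        for (std::size_t j = i + 1; j < xs.size(); ++j)
            acc += std::fabs(xs[i] - xs[j]);
    return acc;
}

// Add bodies in batches until one step blows the frame budget, then
// report the last count that still fit.
int maxBodies(double targetFps, int batch) {
    std::vector<double> xs;
    const double budget = budgetMs(targetFps);
    for (;;) {
        for (int k = 0; k < batch; ++k) xs.push_back(double(xs.size()));
        auto t0 = std::chrono::steady_clock::now();
        volatile double sink = stepCost(xs);
        (void)sink;
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - t0).count();
        if (ms > budget) return int(xs.size()) - batch;
    }
}
```

In practice you'd average over many frames rather than trust a single timing, but the idea is the same: the body counts quoted above are where each platform's step time crosses the 25 ms (40 FPS) or 16.7 ms (60 FPS) line.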
I guess the takeaway from this is that although Stage3D was a great step forward for the Flash platform in bringing native rendering (and I'm still using it in other projects), when it comes to computationally intense parts of games like physics or AI it still isn't a viable platform, at least on mobile. Recent developments like the new Falcon compiler and AS workers are nice, but Adobe is going to have to make more radical changes to the language and VM to be competitive for gaming. Here's hoping the hints about ActionScript 4 are followed through on.