oGFx is a networked, interactive graphics programming environment for motion graphics. The following two paragraphs explain the vision behind the oGFx graphics engine.


The exponential growth in computer power over the last two decades has delivered a visually compelling and interaction-rich experience that invites a redefinition of the way information is displayed onscreen. Graphics programming is both an actor and a subject in the quest to shape digital information in meaningful new ways, providing rapid means to experiment with interactive environment prototypes. In that spirit, recent efforts have been made to design programming environments that favor visual interfaces, avoiding the difficult task of writing code. However, these efforts shelter users from the core concepts of the underlying graphics pipeline, and prevent access to full-featured resources that are only available by writing code. We think that experimenting with code gives users a more powerful understanding of graphics programming, and the means to break away from conceptual limitations such as the separation between 2D and 3D graphics environments. Our goal is to provide an experimental platform that can help change the way motion graphics are envisioned.

Design Principles

The design of any interactive graphics programming environment should encourage modularity and connectivity. Resources should be packaged so they can be developed and tested separately, with easy ways to connect them at runtime to test their interaction. Based on the idea that it is easier to envision two-dimensional mathematical structures than their three-dimensional counterparts, we have placed the two-dimensional graphics principles in the top layer of our system’s hierarchy. Intuitive, natural alterations of a flat surface, such as deformations or fragmentations, serve as a starting point for projecting dynamic 2D textures onto a rich three-dimensional environment. By having full control over the object model that runs the 2D context, along with enough connections to reach down into the lower levels of the graphics pipeline, we can inject parameters from the programming environment across dimensions, allowing synchronization of dynamic effects at all levels, from textures to vertices, geometry, and fragments.
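As a rough illustration of the modularity and connectivity principle described above, the following Python sketch shows independently packaged resources whose parameters can be wired together at runtime, so that a change in a 2D-layer parameter is injected into a downstream 3D stage. All names here (`Resource`, `connect`, the texture and mesh parameters) are hypothetical, not part of the actual oGFx API.

```python
# Illustrative sketch only: the Resource class and parameter names are
# invented for this example and do not reflect the real oGFx object model.

class Resource:
    """A self-contained unit with named parameters and runtime connections."""

    def __init__(self, name, **params):
        self.name = name
        self.params = dict(params)
        # Maps a local parameter name to a list of (target resource,
        # target parameter) pairs that should receive its value.
        self.connections = {}

    def connect(self, param, target, target_param):
        """Wire one of this resource's parameters to another resource."""
        self.connections.setdefault(param, []).append((target, target_param))

    def set(self, param, value):
        """Set a parameter and propagate it through all connections,
        so an update in the 2D layer reaches connected 3D stages."""
        self.params[param] = value
        for target, target_param in self.connections.get(param, []):
            target.set(target_param, value)

# A dynamic 2D texture parameter drives a vertex-level effect on a 3D mesh.
texture = Resource("wave_texture", phase=0.0)
mesh = Resource("terrain_mesh", displacement_phase=0.0)
texture.connect("phase", mesh, "displacement_phase")

texture.set("phase", 0.75)
print(mesh.params["displacement_phase"])  # the 2D parameter reached the 3D stage
```

Because propagation is recursive through `set`, a chain of connected resources (texture, vertex stage, fragment stage) would stay synchronized from a single parameter change, which is the spirit of injecting parameters across all levels of the pipeline.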

