Recently I've been looking at POV-Ray, pyprocessing, and cfdg (version 3.0) as tools for creating digital images. I have branched off two separate blogs, where I mainly explore JRuby + Processing and processing.py.

Friday, 25 March 2011

Pushing the limits with my povwriter library (A load of balls!)

Here is another Processing sketch featuring my povwriter library for Processing. Owing to the way the exporter works (it exports mesh triangles rather than sphere primitives), the following test, which is based on the Processing DXF export examples, pushes the limits of my export library. Even with my degenerate-triangle filter, POV-Ray complains about some triangles when rendering the exported .pov file.
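For the curious, the degenerate-triangle test amounts to checking whether a triangle's three vertices are (nearly) collinear. Here is a minimal sketch of the idea in plain Python; this is my own illustration, not the library's actual filter code:

```python
def is_degenerate(a, b, c, eps=1e-9):
    """A triangle is degenerate when its vertices are (nearly) collinear,
    i.e. the cross product of two edge vectors has (near) zero magnitude."""
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return (cx * cx + cy * cy + cz * cz) < eps

triangles = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),  # a proper triangle
    ((0, 0, 0), (1, 1, 1), (2, 2, 2)),  # collinear vertices -> degenerate
]
kept = [t for t in triangles if not is_degenerate(*t)]  # filters out the second
```

A filter like this catches exact slivers, but triangles that are merely very thin can still slip through the tolerance, which is presumably why POV-Ray still complains about some of them.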

Here is the processing sketch:-

import povexport.*;
import povexport.povwriter.*;

boolean record = false;

void setup() {
  size(400, 400, P3D);
  noStroke();
  sphereDetail(18);
}

void draw()
{ 
  lights();        // this needs to be outside the record loop
  if (record) {    
    translate(0, height/4, 0);
    noLights();    // let PovRAY do the lighting
    noLoop();      // don't loop while recording sketch
    beginRaw(RawPovray.POV, "balls.pov");
  }
  render();
  if (record) {
    endRaw();
    record = false;
    loop();
  }
}

void render() {  
  background(0);
  translate(width / 3, height / 3, -200);
  rotateZ(map(mouseY, 0, height, 0, PI));
  rotateY(map(mouseX, 0, width, 0, HALF_PI));
  for (int y = -2; y < 2; y++) {
    for (int x = -2; x < 2; x++) {
      for (int z = -2; z < 2; z++) {
        pushMatrix();
        translate(120*x, 120*y, -120*z);
        sphere(30);
        popMatrix();
      }
    }
  }
}

void keyPressed() {
  switch(key) {
  case 'r': 
    record = true;
    break;
  }
}



Cornell_Box template

Here is the same Processing scene rendered using a simpler POV-Ray template:-

Here are the changes I made to the output.pov file to create golden balls:-

//#declare Colour0 = rgb<1, 1, 1>;             // White
#declare Colour0 = color Gold;             // Post production edit of color

simple_scene.pov template

Wednesday, 23 March 2011

Exporting Processing Sketches to Povray

Previously I explored exporting Processing sketches to POV-Ray using a modified supercad library (by Guillaume La Belle); however, he doesn't seem to be interested in doing anything further with the existing library, so I have started developing my own version, which is hosted at java.net. This is partly an experiment with hosting at java.net; so far I have found the integration with NetBeans to be as good as project kenai. Where my project differs from Guillaume La Belle's library is that mine uses a separate template.pov file rather than a hard-coded template. Also, rather than a dodgy kludge to avoid degenerate triangles, my version has a degenerate-triangle filter built in. Here is the output of my T_test.pde rendered in POV-Ray 3.7, using a pov template with radiosity and a Cornell box (see previous posting). Another feature of my template is that the Processing part of the pov file is wrapped in a union; this means you can scale, rotate, and translate the Processing bit within the scene. Download a beta version of my library here; get povray-3.7-beta here. Update 19 May 2012: I am now hosting an experimental version to support processing-2.0 at github; please try that out with processing-2.06a or a more recent version from svn. Actually, since January 2013 I've reverted to using java.net to host development of the library for processing-2.0.
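The template idea can be sketched in a few lines of Python: the exporter drops the exported geometry into a placeholder inside a user-editable template, and wrapping it in a union lets you transform the whole Processing part within the scene. The placeholder token and function below are purely illustrative; they are not povwriter's actual marker or API:

```python
# Sketch of the separate-template approach: the placeholder token
# %%PROCESSING_GEOMETRY%% is my invention for illustration only.
TEMPLATE = """\
#include "colors.inc"
union {
%%PROCESSING_GEOMETRY%%
  rotate <0, 30, 0>   // transform the whole Processing part in the scene
}
"""

def fill_template(template, geometry_lines):
    """Indent the exported geometry and splice it into the template."""
    body = "\n".join("  " + line for line in geometry_lines)
    return template.replace("%%PROCESSING_GEOMETRY%%", body)

pov = fill_template(TEMPLATE, ["triangle { <0,0,0>, <1,0,0>, <0,1,0> }"])
```

Because the template lives in its own file, you can change lighting, radiosity settings, or the union's transforms without touching the exporter at all.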


Monday, 21 March 2011

Re-factored (1L) context sensitive sketch in (vanilla) processing

Here I have re-factored the sketch from my previous blog entry to use the Pen and PenStack classes from my LSystems library; by doing so I have avoided the need for affine transforms and matrix operations (the latter can be confusing). The thing to note is that if you need to add a non-context-sensitive rule using my library, the 'premise' should be a char and the rule a String.
The context-sensitive 'premise', by contrast, must be a String. Currently (at version 0.7.0) my LSystem library only supports (1L) context-sensitive rules (further, it does not support null or wildcard context characters). I might think about supporting (2L) context-sensitive rules at some stage.

/**
 * cs_test3.pde by Martin Prout
 * Demonstrates a simple (1L) context sensitive grammar with ignored
 * symbols. Makes use of Pen and PenStack from LSystem utilities, avoids
 * use of processing affine transforms and matrix operations.
 */

import java.text.CharacterIterator; // needed for grammar.getIterator()

import lsystem.turtle.*;
import lsystem.collection.*;
import lsystem.CSGrammar;

CSGrammar grammar;
float distance = 30;
float THETA = radians(30);
color startColor = color(255, 0, 0);
color endColor = color(0, 255, 0);
float drawLength = 30;
void setup() {
  size(350, 180);
  createGrammar();  
  strokeWeight(4);
  iterateGrammar();
}

void createGrammar() {
  String axiom = "G[+F]F[-F][+F]F";  
  grammar = new CSGrammar(this, axiom); // initialize library
  grammar.addRule("G<F", "G");          // add cs replacement rule
  grammar.setIgnoreList("-+[]");
}

void render(float xpos, float ypos, String production) {
  PenStack stack = new PenStack(this); // initialize local stack
  float theta = -PI/2; // this way is up in the processing environment
  Pen pen = new Pen(this, xpos, ypos, theta, drawLength, startColor); 
  CharacterIterator it = grammar.getIterator(production);
  for (char ch = it.first(); ch != CharacterIterator.DONE; ch = it.next()) {
    switch (ch) {
    case 'F': 
      pen.setColor(startColor);    
      drawLine(pen);
      break;
    case 'G': 
      pen.setColor(endColor);    
      drawLine(pen);
      break;  
    case '-':
      pen.setTheta(pen.getTheta() - THETA);
      break;
    case '+':
      pen.setTheta(pen.getTheta() + THETA);
      break;
    case '[':
      stack.push(new Pen(pen));
      break;
    case ']':
      pen = stack.pop();
      break;     
    default:
      System.err.println("character " + ch + " not in grammar");
    }
  }
}

void iterateGrammar() {
  for (int i = 0; i < 6; i++) {  
    String production = grammar.createGrammar(i);
    float xpos = 40 + (i * 50);
    float ypos = height * 0.9;
    render(xpos, ypos, production);
  }
}

void drawLine(Pen pen) { // draws line and sets new pen position
  float x_temp = pen.getX(); 
  float y_temp = pen.getY();
  pen.setX(x_temp + pen.getLength() * cos(pen.getTheta()));
  pen.setY(y_temp + pen.getLength() * sin(pen.getTheta())); 
  stroke(pen.getColor());
  line(x_temp, y_temp, pen.getX(), pen.getY());
}
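The same turtle-state idea carries over to other languages: a Pen records position and heading, and an ordinary list serves as the stack, so no matrix operations are needed. Here is my own minimal sketch of the pattern in plain Python (not the library code):

```python
import math
from dataclasses import dataclass, replace

@dataclass
class Pen:
    x: float
    y: float
    theta: float
    length: float

def draw_line(pen, lines):
    """Advance the pen and record the drawn segment, instead of relying
    on transform-matrix state as pushMatrix/popMatrix sketches do."""
    nx = pen.x + pen.length * math.cos(pen.theta)
    ny = pen.y + pen.length * math.sin(pen.theta)
    lines.append(((pen.x, pen.y), (nx, ny)))
    pen.x, pen.y = nx, ny

lines = []
pen = Pen(0.0, 0.0, -math.pi / 2, 30.0)  # -pi/2: "up" in screen coordinates
stack = []
for ch in "F[+F]F":
    if ch == "F":
        draw_line(pen, lines)
    elif ch == "+":
        pen.theta += math.radians(30)
    elif ch == "-":
        pen.theta -= math.radians(30)
    elif ch == "[":
        stack.append(replace(pen))  # push a copy of the pen state
    elif ch == "]":
        pen = stack.pop()           # restore the saved pen state
```

After the branch in "F[+F]F" is popped, the pen carries on straight up from the end of the first segment, exactly as pushMatrix/popMatrix would arrange, but with state you can inspect directly.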


Sunday, 20 March 2011

Another (1L) context sensitive sketch in (vanilla) processing

Here is another sketch that uses my LSystem library (version 0.7.0 or later) to explore (1L) context-sensitive rules (NB: the ignore-character list may be initialized with either a String or a char array). Certain combinations of replacement rules and ignore lists may still lead to "out of range" errors, although I am not sure whether real-world examples would be affected.

/**
* cs_test2.pde by Martin Prout
* Demonstrates a simple (1L) context sensitive grammar with ignored
* symbols. 
*/

import java.text.CharacterIterator; // needed for grammar.getIterator()

import lsystem.turtle.*;
import lsystem.collection.*;
import lsystem.CSGrammar;

CSGrammar grammar;
float distance = 30;
float THETA = radians(30);

void setup() {
  size(500, 200);
  createGrammar();  
  strokeWeight(4);
  translate(width * 0.4, height * 0.7);
  iterateGrammar();
}

void createGrammar() {
  String axiom = "G[+F]F[-F][+F]F";  
  grammar = new CSGrammar(this, axiom); // initialize library
  grammar.addRule("G<F", "G");          // add cs replacement rule
  grammar.setIgnoreList("-+[]");
}

void render(String production) {   
  CharacterIterator it = grammar.getIterator(production);
  rotate(-PI/2);
  for (char ch = it.first(); ch != CharacterIterator.DONE; ch = it.next()) {
    switch (ch) {
    case 'F': 
      stroke(255, 0, 0);    
      line(0,0, distance, 0);
      translate(distance, 0);
      break;
    case 'G': 
      stroke(0, 255, 0);     
      line(0,0, distance, 0);
      translate(distance, 0);
      break;  
    case '-':
      rotate(THETA);
      break;
    case '+':
      rotate(-THETA);
      break;
    case '[':
      pushMatrix();
      break;
    case ']':
      popMatrix();
      break;
    default:
      System.err.println("character " + ch + " not in grammar");
    }
  }
}

void iterateGrammar() {
  for (int i = 0; i < 6; i++) {
    String production = grammar.createGrammar(i);
    translate(40, 0);
    pushMatrix();
    render(production);
    popMatrix();
  }
}




Wednesday, 16 March 2011

Exploring context-sensitive L-system (vanilla-processing)

I have just uploaded the latest version of my LSystem Utilities library at project kenai; it now includes tools for working with context-sensitive grammars. Here is a simple example, included with my utilities, that demonstrates one of the features of a context-sensitive L-system. Update 20 March 2011: I think I might have finally solved the bug in my context-sensitive grammar. See version 0.7.0, which I think solves that problem and also allows the context sensitivity to scroll around, a behaviour that was effortlessly achieved in my ruby-processing prototype. Note that my vanilla-processing library requires a non-context-sensitive premise to be a single Java char (i.e. not a Java String). The context-sensitive premise is a Java String of three characters: the first char provides the 'context', the middle char is either '<' (meaning the context char comes before) or '>' (the context char follows), and the third char is the one that is replaced according to the context rule.

/**
* cs_test.pde
* Demonstrates a simple context sensitive grammar without ignored
* symbols. No real sketch, just console output that demonstrates
* character moving in the string (without adjusting length).
* NB: you may need to scroll the console to see all the output.
*/
import lsystem.collection.*;
import lsystem.CSGrammar;

CSGrammar grammar; 

void setup() {
  createGrammar();
  testGrammar();
}

void createGrammar() {
  String axiom = "baaaaaaa";  
  grammar = new CSGrammar(this, axiom); // initialize library
  grammar.addRule("b<a", "b");          // add cs replacement rule
  grammar.addRule('b', "a");            // add simple replacement rule  
}

void testGrammar() {
  for (int i = 0; i < 8; i++) {
    String production = grammar.createGrammar(i);
    println(production);
  }
}

/**
Test Output, note the string does not grow in length,
but the b character travels along the string.

baaaaaaa
abaaaaaa
aabaaaaa
aaabaaaa
aaaabaaa
aaaaabaa
aaaaaaba
aaaaaaab
*/

This facility could potentially be used in an animation, or perhaps to control musical scales.
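To see how the scrolling behaviour above comes about, the (1L) rewrite can be modelled in a few lines of plain Python. This is a simplified sketch of the idea, not the library's actual code:

```python
def step(s, cs_rules, plain_rules):
    """One parallel rewrite pass: each char is replaced via a (1L)
    left-context rule when one matches, otherwise via a plain rule."""
    out = []
    for i, ch in enumerate(s):
        ctx = s[i - 1] if i > 0 else None   # the single left-context char
        if (ctx, ch) in cs_rules:
            out.append(cs_rules[(ctx, ch)])
        else:
            out.append(plain_rules.get(ch, ch))
    return "".join(out)

# "b<a" -> "b" as the context rule, 'b' -> "a" as the plain rule
cs = {("b", "a"): "b"}
plain = {"b": "a"}
production = "baaaaaaa"
for _ in range(7):
    production = step(production, cs, plain)
# production is now "aaaaaaab": the b has travelled along the string
```

Each pass, the old 'b' turns into 'a' by the plain rule while the 'a' to its right turns into 'b' by the context rule, so the 'b' appears to move one place per generation without the string growing.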

Sunday, 6 March 2011

Alhambra Tiling in processing.py (Version 4) using lerp in a python module

I was just messing with my Alhambra sketch and found, rather surprisingly (to me, but I'm no Python buff, so easily surprised), that I had access to Processing functions (notably lerp here) from within a module (i.e. similar to the inner-class access in vanilla processing). This contrasts somewhat with ruby-processing, where Jeremy Ashkenas came up with a Processing::Proxy mixin to achieve the same type of access. You will find that if you substitute the following tpoint module in place of the one in my version 3, you get the same result. Or change the ratio in the Alhambra sketch to distort the tiles somewhat.

"""
tpoint.py is a module that contains my lightweight TPoint class;
it avoids accessors by having public attributes (the python default)
"""

class TPoint(object):
    """
    A lightweight convenience class to store triangle point data,
    with a function to return a new instance at a point between
    two TPoint objects
    """

    def __init__(self, x=0, y=0):
        """
        Initialise the new TPoint object
        """
        self.x = x
        self.y = y

    def mid_point(self, tvector, ratio=0.5):
        """
        Returns a linear interpolation point, where ratio is the fraction
        of the distance from the current point to the input point; the
        default yields the mid point (lerp is a processing function).
        To make sense the ratio should be less than 1.0; in practice you
        can use bigger values, it just skews the point in a different way.
        """
        return TPoint(lerp(self.x, tvector.x, ratio), lerp(self.y, tvector.y, ratio))

    def add(self, tvector):
        """
        Adds a tvector TPoint to self
        """
        self.x += tvector.x
        self.y += tvector.y
Of course, when you run the sketch you will find that your tpoint module has been compiled to tpoint$py.class, so I guess this is treated as an "inner class"? My feeble attempts at decompiling the code with jode haven't worked yet...
I suppose I need to find the enclosing class (wherever that gets stored; I should check tmp? Actually there is a jffi***.tmp file produced, but that's not much use).
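Incidentally, the module above only works unchanged inside the processing.py environment, where lerp is injected for you. To use TPoint from plain Python you can supply lerp yourself; a minimal sketch:

```python
def lerp(start, stop, amt):
    """Plain-Python equivalent of Processing's lerp(): linear interpolation."""
    return start + (stop - start) * amt

class TPoint(object):
    """Cut-down copy of the tpoint module for use outside processing.py."""
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def mid_point(self, other, ratio=0.5):
        return TPoint(lerp(self.x, other.x, ratio), lerp(self.y, other.y, ratio))

m = TPoint(0, 0).mid_point(TPoint(10, 4))  # m.x == 5.0, m.y == 2.0
```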

Friday, 4 March 2011

Trying out itertools in processing.py

Continuing my explorations of the differences between Python, Ruby, and Java (Processing), I have recently created a context-sensitive L-system grammar library in ruby-processing. I'm not sure I could have done it in Python, or Java for that matter (it is not that I don't think it can be done, because I intend to have a good go at repeating the exercise in processing.py and then Java, but it should now be easier, having done it once). It is just that the semantic load seems to be much less in Ruby. Take one simple thing: in Ruby, if you want to do something x number of times, it could not be simpler:-

x.times do
  ...
end

and you are done! That is how I came to itertools for Python: it is interesting, so I will play with it, but it certainly does not lessen the semantic load (in fact it is mainly a distraction). I'm not happy with the for i in range(0, x): loop when I have no intention of using i. However it turns out that, apart from the semantically nice 'repeat', the itertools syntax is even crazier; for efficiency the following form is suggested:-

for _ in itertools.repeat(None, x):

where the underscore signals that the count is not tracked. One neat trick (in Python) I learnt is that you can put import statements inside a function (and so avoid namespace clashes); here is an extract from my modified grammar.py:-

def repeat(rpx, axiom, rules):
    """
    Repeat rule substitution in a recursive fashion rpx times
    """
    production = axiom
    from itertools import repeat
    for _ in repeat(None, rpx):
        production = produce(production, rules)
    return production
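Since produce() itself is not shown in the post, here is a self-contained sketch of the same pattern with a minimal produce() of my own (simple parallel single-character substitution, which is all a context-free L-system needs):

```python
from itertools import repeat

def produce(production, rules):
    """One round of parallel rule substitution; chars without a rule
    are copied through unchanged (my minimal stand-in, not grammar.py's)."""
    return "".join(rules.get(ch, ch) for ch in production)

def rewrite(rpx, axiom, rules):
    production = axiom
    for _ in repeat(None, rpx):  # loop rpx times without tracking a counter
        production = produce(production, rules)
    return production

# Lindenmayer's algae system: A -> AB, B -> A
rewrite(3, "A", {"A": "AB", "B": "A"})  # "ABAAB"
```

The generations run A, AB, ABA, ABAAB, with lengths following the Fibonacci numbers, which makes it a handy smoke test for any rewriting engine.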

About Me

Pembrokeshire, United Kingdom
I have developed JRubyArt and propane new versions of ruby-processing for JRuby-9.1.5.0 and processing-3.2.2