You can combine CSS transitions with JavaScript to highlight an element and fade the highlight out over e.g. 2 seconds, while the actual style of the element is quickly reverted to its original state.

Here is a quick example:

Try it out live here: https://leovt.github.io/leovt/highlight.html

The animation is controlled by the browser, so you do not have to worry about resetting all styles properly when the animation is finished:

The code works as follows:

  1. The normal style of the element contains a rule for the transition from the highlighted state back to the original.
  2. The highlighted style disables the transition, so as to immediately highlight without any delay.
  3. When clicking the button the highlighted state is set, and (almost) immediately reset with a setTimeout callback.

Define the normal style of the element and the highlighted state:

      .score {
        transition: all 1.5s ease-out;
        background: lightblue;
      }

      .score.hit {
        transition: none;
        background: crimson;
        color: white;
      }

The event handler for the button click is very simple. Note that the timeout uses a delay of 50ms. With a delay of 0ms I could not get a reliable result; it seems the change to .hit and back was so fast that the engine could not register it properly.

      function addPoints() {
        var element = document.getElementById('sc1');
        element.innerText = (+element.innerText) + 100;
        element.classList.add('hit');
        window.setTimeout( () => element.classList.remove('hit'), 50);
      }

The example as a self-contained page: https://github.com/leovt/leovt/blob/master/highlight.html

The sample program from the previous post not only demonstrated some OpenGL calls and concepts, but also illustrated that code organization quickly becomes important as soon as the program grows beyond a couple of lines.

So I refactored some of the OpenGL interface code into classes. I noticed that for each draw call I need to set up the OpenGL context by binding some objects, like the shader program, the target framebuffer or the source texture.

I also learned that after you are done using any OpenGL object it is helpful to unbind it. The reason for this is not that they would use too many resources, but decoupling. It might be that your program just coincidentally has the right objects bound, but as soon as you move any of the code, something will break. Explicitly unbinding helps avoid these dependencies of different parts of your program on whatever GL objects other parts happened to bind last.

So my drawing operations started to look like this:

# Bind needed objects 
framebuffer.bind()
render_program.use()
texture.bind()

gl.glDrawSomething(...)

# Unbind Objects
texture.unbind()
render_program.end_use()
framebuffer.unbind()

Just by renaming the matching method pairs .bind() and .unbind() to __enter__ and __exit__, the objects become context managers and the code above becomes much cleaner:

with framebuffer, render_program, texture:
    gl.glDrawSomething(...)

This style has the advantage that the objects used for a draw call can be specified in a single with line. On the other hand, it removes flexibility in when each object is bound and unbound. I think in most cases this is not a disadvantage, because the objects usually belong together logically (e.g. use this texture together with that shader).
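As a sketch of the renaming trick (this is an illustrative class, not the actual code from the repository), here is a minimal bindable object whose bind/unbind pair doubles as the context manager protocol; the real gl calls are only indicated in comments:

```python
class Bindable:
    """Illustrative sketch: an object whose bind/unbind pair
    doubles as the context manager protocol."""
    def __init__(self):
        self.bound = False

    def bind(self):
        # in the real classes: e.g. gl.glBindTexture(gl.GL_TEXTURE_2D, self.name)
        self.bound = True

    def unbind(self):
        # in the real classes: e.g. gl.glBindTexture(gl.GL_TEXTURE_2D, 0)
        self.bound = False

    # reuse the existing method for the context protocol;
    # __exit__ takes the exception arguments, so it wraps unbind
    __enter__ = bind

    def __exit__(self, exc_type, exc_value, traceback):
        self.unbind()

framebuffer, render_program, texture = Bindable(), Bindable(), Bindable()
with framebuffer, render_program, texture:
    # all three are bound here; the draw call would go in this block
    assert framebuffer.bound and render_program.bound and texture.bound
# on leaving the with block all three are unbound again, even if the
# body raised an exception
```

Note that because __exit__ is always called, the unbinding described above happens even when the draw call raises.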

A refactored version of the last post's example, which uses the context managers, is available on GitHub.

This post will show you how to set up a framebuffer in OpenGL using Python and Pyglet.

See my previous post for some tips on how to use OpenGL functions from Python.

Structure of the Program

As this is a demo program I did not structure it with Python classes. However, the complexity is already high enough to see that this flat structure is not suitable for any larger program.

The following illustrates the drawing steps. The OpenGL objects in the grey boxes are globals
in my python program.
Flowchart for the Draw Event

There are two steps: first a triangle is rendered to a texture using a framebuffer, and second the texture is copied to the screen. Each step uses

  • A Shader Program containing the GLSL code used for rendering
  • A Vertex Buffer containing vertex data for the objects being drawn
  • A Vertex Array Object containing the information how the data
    in the buffer is linked to the vertex attributes in the shader program

The full program is available on github.

Scene description

In this example the framebuffer is used to render the triangle from the previous post at a very low resolution (30×20 px). This image is then used for texturing two rectangles in the main window.

Note that neither in the framebuffer nor on the main screen do I use any vertex transformation. Therefore only the x and y coordinates are used. The lower left corner of the screen has coordinates (-1, -1) and the upper right corner (1, 1). Texture coordinates, however, run from 0 to 1.
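To make the two coordinate conventions concrete, here is a small illustrative helper (not part of the sample program) that maps a pixel position to both systems:

```python
def pixel_to_ndc(px, py, width, height):
    """Map a pixel position to normalized device coordinates.
    (0, 0) is the lower-left pixel; NDC runs from -1 to 1 on both axes."""
    return (2.0 * px / width - 1.0, 2.0 * py / height - 1.0)

def pixel_to_texcoord(px, py, width, height):
    """Texture coordinates run from 0 to 1 instead."""
    return (px / float(width), py / float(height))

# for the 30x20 framebuffer: the lower-left corner maps to (-1, -1)
# in NDC, while the centre maps to (0, 0)
assert pixel_to_ndc(0, 0, 30, 20) == (-1.0, -1.0)
assert pixel_to_ndc(15, 10, 30, 20) == (0.0, 0.0)
```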

framebuffer

Vertex Array Objects

In the previous program there was only one GLSL program and only one vertex buffer. Therefore it was convenient to store the vertex attribute bindings in the default OpenGL state. In this program however we switch back and forth between the triangle rendering program which uses color attributes for the vertices and the copy to screen program which uses texture coordinates.

A vertex array object stores the information on how data in a buffer is linked to the vertex attributes of the shader program. So I can define this connection once (using glVertexAttribPointer) and then only need to select the correct vertex array object when drawing.

def setup_render_vertexbuffer():
    ...
    gl.glBindVertexArray(render_vao)

    gl.glEnableVertexAttribArray(loc_position)
    gl.glEnableVertexAttribArray(loc_color)

    # the following bindings will be remembered by the vao
    gl.glBindBuffer(gl.GL_ARRAY_BUFFER, render_vertexbuffer)

    gl.glVertexAttribPointer(loc_position, 2, gl.GL_FLOAT, False, 
         ctypes.sizeof(COLOR_VERTEX), 
         ctypes.c_void_p(COLOR_VERTEX.position.offset))
    gl.glVertexAttribPointer(loc_color, 4, gl.GL_FLOAT, False, 
         ctypes.sizeof(COLOR_VERTEX), 
         ctypes.c_void_p(COLOR_VERTEX.color.offset))
    gl.glBindVertexArray(0)


def render_to_texture():
    ...
    # draw using the vertex array for vertex information
    gl.glBindVertexArray(render_vao)
    gl.glDrawArrays(gl.GL_TRIANGLES, 0, 3)
    gl.glBindVertexArray(0)

Gotcha: The Viewport

OpenGL needs to know the size of the rendering target. In this program there are two targets, so you need to tell OpenGL the new size every time you switch the drawing target.

def render_to_texture():
    gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, framebuffer)
    gl.glViewport(0, 0, FB_WIDTH, FB_HEIGHT)
    ...

def copy_texture_to_screen():
    gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, 0)
    gl.glViewport(0, 0, window.width, window.height)
    ...

Putting it together

The program has 300 lines, and as I mentioned before, it could benefit from more structure and also from deduplication of code. Setting up two GLSL shader programs, two vertex array objects, two vertex buffers and two drawing steps has led to some duplication, which could be avoided if these were each two instances of a shader class, a vertex array class and a vertex buffer class.

def draw():
    render_to_texture()
    copy_texture_to_screen()

def main():
    global window
    window = pyglet.window.Window()

    setup_framebuffer()
    setup_render_program()
    setup_render_vertexbuffer()
    setup_copy_program()
    setup_copy_vertexbuffer()

    window.on_draw = draw
    pyglet.clock.schedule_interval(lambda dt:None, 0.01)
    pyglet.app.run()

Try out the complete program. You can get it on github.
Please comment below if something does not work or what I could explain better.

Calling some OpenGL functions from Python with pyglet can be a bit tricky due to the fact that the functions are only thin wrappers around the C API.

My sample program demonstrates some calls for using modern OpenGL with pyglet. The program is intentionally quite flat, without any classes. In a larger program I would manage the involved OpenGL objects with Python classes.

triangle in a program window

This is not a full tutorial, but some aspects of the sample program are explained below.

Pyglet

I used Pyglet because it includes an OpenGL binding and supports Python 3.

pyglet provides an object-oriented programming interface for developing games and other visually-rich applications for Windows, Mac OS X and Linux.

Call by Reference

Many OpenGL calls use call by reference (a pointer) for returning values. Such functions can be called using ctypes.byref.

The example first creates a GLuint initialized to zero and then passes a pointer to glGenBuffers.

from pyglet import gl
import ctypes

vertexbuffer = gl.GLuint(0)
gl.glGenBuffers(1, ctypes.byref(vertexbuffer))

Structs for Vertex Attributes

Using a struct for passing vertex attributes can take advantage of the meta-information provided by ctypes. It is also easier to manage than a flat array of float values. In the example the vertex shader takes a vec2 and a vec4 as attributes.

Note that, as in the example above, I use the type aliases provided by pyglet.gl in order to ensure types compatible with OpenGL.

In ctypes you create an array by multiplying the type object.

class VERTEX(ctypes.Structure):
    _fields_ = [
        ('position', gl.GLfloat * 2),
        ('color', gl.GLfloat * 4),
    ]

The structure fields are given an offset attribute which can be used for passing to glVertexAttribPointer

gl.glVertexAttribPointer(loc_position, 2, gl.GL_FLOAT, False,
                         ctypes.sizeof(VERTEX),
                         ctypes.c_void_p(VERTEX.position.offset))
gl.glVertexAttribPointer(loc_color, 4, gl.GL_FLOAT, False,
                         ctypes.sizeof(VERTEX), 
                         ctypes.c_void_p(VERTEX.color.offset))

And the structure is initialized very easily; here I create an array of three vertices, each containing a 2D position and a 4D color.

data = (VERTEX * 3)(((-0.6, -0.5), (1.0, 0.0, 0.0, 1.0)),
                    (( 0.6, -0.5), (0.0, 1.0, 0.0, 1.0)),
                    (( 0.0,  0.5), (0.0, 0.0, 1.0, 1.0)))
gl.glBufferData(gl.GL_ARRAY_BUFFER, ctypes.sizeof(data), 
                data, gl.GL_DYNAMIC_DRAW)

String Arguments

OpenGL expects C-style strings, so it is easiest to use byte strings.
ctypes has string buffers which can translate between bytes in Python and char* in C.

loc_position = gl.glGetAttribLocation(program,
                      ctypes.create_string_buffer(b'position'))

They can also be used for retrieving strings from OpenGL such as the log of a shader compilation.

length = gl.GLint(0)
gl.glGetShaderiv(shader_name, gl.GL_INFO_LOG_LENGTH, ctypes.byref(length))
log_buffer = ctypes.create_string_buffer(length.value)
gl.glGetShaderInfoLog(shader_name, length, None, log_buffer)

Finding out about getting the compilation log really helped me when writing my own shaders.

The code for passing the shader source to OpenGL is still somewhat messy with a ctypes cast. I would be glad if you can suggest a better alternative in the comments.

src_buffer = ctypes.create_string_buffer(shader_source)
buf_pointer = ctypes.cast(ctypes.pointer(ctypes.pointer(src_buffer)),
                          ctypes.POINTER(ctypes.POINTER(ctypes.c_char)))
length = ctypes.c_int(len(shader_source) + 1)
gl.glShaderSource(shader_name, 1, buf_pointer, ctypes.byref(length))

I have written a small HTML5 game. It uses SVG for the presentation and the jQuery JavaScript library. I hope it can serve as an example of simple interaction with JavaScript.

Try it out or look at the sources on GitHub.

tictactoe

How it works:

The game consists of three files:

  1. tictactoe.html is an HTML5 file with an embedded SVG graphic representing the playfield.
  2. tictactoe.js contains the JavaScript code.
  3. jQuery, which is linked directly from the Google CDN.

The code in tictactoe.js uses jQuery to hide and show the O’s and X’s in the playfield as the game proceeds.

jQuery is a JavaScript library which makes it easy to manipulate the DOM, i.e. the structure of the document while it is being displayed in the browser. It also abstracts away differences between browsers. Google hosts several versions of jQuery, so you don’t even need to have it on your own server; only a reference to the Google-hosted copy is needed. This has the further advantage that the library may already be cached if the user has visited a different site using jQuery before.

In this post I will show how coverage.py can help you write complete tests and how to write a test for a case that should not ever occur.

Meanwhile I am adding more bytecodes to my interpreter, so if you are interested in my implementation of python bytecodes take a look at https://github.com/leovt/interp/blob/master/interp.py.

In my last post I described why and how I switched to unittest. Running the tests after every code change makes development much more comfortable, and I can be reasonably confident that the change did not break what was working before.

But there is a nagging question: do I really test all possible paths in my code? Did I forget to test some exceptional code path? The answer to this question is given by Ned Batchelder’s coverage.py.

This tool records which lines of code are executed and then produces a nice report.

Installation

I am using Ubuntu, so I could install it simply with

sudo apt-get install python-coverage

Running the coverage tool

First clear the results of previous runs

leonhard@leovaio:~/interp$ python-coverage erase

Run the unittest with the coverage tool

leonhard@leovaio:~/interp$ python-coverage run interp_test.py -b
...s........
----------------------------------------------------------------------
Ran 12 tests in 0.019s

OK (skipped=1)

Then run coverage again to produce a report.

leonhard@leovaio:~/interp$ python-coverage report -m
Name                                     Stmts   Miss  Cover   Missing
----------------------------------------------------------------------
/usr/share/pyshared/coverage/collector     132    127     4%   3-229, 236-244, 248-292
/usr/share/pyshared/coverage/control       236    235     1%   3-355, 358-624
/usr/share/pyshared/coverage/execfile       35     14    60%   3-17, 42-43, 48, 58-65
interp                                     182      1    99%   269
interp_test                                103      8    92%   44-52, 122
----------------------------------------------------------------------
TOTAL                                      688    384    44%   

The line that interests me is the one for interp.py

interp                                     182      1    99%   269

Apparently my tests miss one line: line 269 in interp.py. This line throws the exception when an unknown bytecode is found. How can I write a test that executes this line?

I could simply run my interpreter with bytecodes that I have not yet implemented, but this would invalidate the test when I add those bytecodes later. Therefore I want to test with a bytecode that does not exist in Python, e.g. bytecode 255.

Obviously I cannot create a code object containing this bytecode just by defining a function. Instead I use a technique called mocking, which takes advantage of duck typing in Python. I don’t really need to pass a code object to my interpreter; any object with the right attributes will do. There are libraries that help create mock objects, but for this simple use case I will just roll my own:

        class Mock(object):
            co_code = '\xff\x00\x00'
            co_nlocals = 0

I can then verify that this illegal bytecode actually causes line 269 to be executed and the exception to be raised, by adding the unit test

    def test_unknown_opcode(self):
        class Mock(object):
            co_code = '\xff\x00\x00'
            co_nlocals = 0

        with self.assertRaises(interp.InterpError):
            interp.execute(Mock(), {})

By the way, the coverage tool can also produce a very nice HTML report, highlighting the covered and uncovered lines of a Python file with colors.

In this post I introduce the unittest module and set up a few basic tests for the code I have developed so far. While setting up the new tests, sure enough, I was surprised by an error and had to restructure one test. While the new test works, I do not think the last test case is very good, so suggestions are welcome …

So far I have added a little test function or two at the top of the interp module and changed it for every new feature I wanted to implement in the interpreter.

This has the advantage of being able to quickly try things out while writing interp.py, but it has important disadvantages:

  • Tests are in the same file as “production” code.
  • I lose all the previous tests.
  • No standard way to regularly run the test.

In order to improve the testing, I am going to use the unittest module of the Python standard library.

unittest provides the TestCase class which you subclass to add your own test methods. Further there is a TestSuite class which allows you to organize and run your test cases. For the moment I am not going to use TestSuite instances but will let unittest.main take care of providing a simple command line interface for my tests.

if __name__ == '__main__':
    unittest.main()

This will provide the command line interface:

~/interp$ python interp_test.py -b
...
----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK

The -b switch suppresses stdout for successful tests.
There is also a -v switch which causes the test runner to name each test.

My test from the first post is implemented like this, and the one for the for loop is very similar. All tests I have so far just test that the returned value of the test function is the same in Python and in my own interpreter.

class TestRetrunValues(unittest.TestCase):
    """
    These tests are verifying that the return values
    of some functions are the same in native Python and
    in my interpreter.
    """
    def test_basic_operations(self):
        """
        Testing the first example
        """
        def test():
            a = 2
            b = a + 4
            return (a + 1) * (b - 2)

        expected = test()
        realized = interp.execute(test.func_code, {})

        self.assertEqual(expected, realized)

When I copied the test for calling a function into the test module, I surprisingly got an error. Surprisingly, because the same code had worked when it was directly in interp.py.

    def test_call(self):
        """
        test from third post: calling a function
        """
        def test():
            return square(4) + square(3)

        def square(n):
            return n * n

        expected = test()
        realized = interp.execute(test.func_code, test.func_globals)

        self.assertEqual(expected, realized)
======================================================================
ERROR: test_call (__main__.TestRetrunValues)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "interp_test.py", line 50, in test_call
    realized = interp.execute(test.func_code, test.func_globals)
  File "/home/leonhard/interp/interp.py", line 148, in execute
    raise Exception('Unknown Opcode %d (%s)' % (bc, dis.opname[bc]))
Exception: Unknown Opcode 136 (LOAD_DEREF)

----------------------------------------------------------------------

What happened, and why is it different from when the test was in the interp.py file?

The difference is that now the test and square functions are not defined at global scope. Therefore the test function, when looking for square, will not search for it in the global namespace but in its closure. I will not go into this topic further right now; what counts is that for this test to succeed I would first need to extend my interpreter. So for the moment I am going to rewrite this test case.
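The difference can be made visible with the dis module. The snippet below uses Python 3 (where the post's func_code attribute is spelled __code__), but the distinction between the two load instructions is the same one my interpreter ran into:

```python
import dis

def make_nested():
    def square(n):
        return n * n
    def test():
        # square is a free variable here, resolved through the closure
        return square(4) + square(3)
    return test

def square(n):
    return n * n

def test_global():
    # square is resolved through the global namespace here
    return square(4) + square(3)

nested_ops = {i.opname for i in dis.get_instructions(make_nested())}
global_ops = {i.opname for i in dis.get_instructions(test_global)}
# the nested version uses LOAD_DEREF, the module-level one LOAD_GLOBAL
```

So an interpreter that only implements LOAD_GLOBAL fails as soon as the called function is looked up through a closure.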

At the same time I do not want to lose this test case, but I also do not want my test run to fail because of it. unittest provides a decorator, @skip, which disables the test method with a message. There are also more sophisticated conditional skips available.

    @unittest.skip('closures (e.g. LOAD_DEREF) not yet implemented')
    def test_call(self):

In verbose mode unittest will now display the skip message and the overall test result is “OK (skipped=1)”

For the amended test case I define the executed functions at module level. Actually I am not really satisfied with this solution, as the test case is now split across different parts of the interp_test.py file. If anyone can propose a better way to define this test, I would much appreciate your comment!

def _call_global():
    '''
    A test function for the following unittest. 
    Must be defined in module level
    '''
    return _square(4) + _square(3)

def _square(n):
    return n * n

class TestRetrunValues(unittest.TestCase):
    def test_call_global(self):
        """
        test from third post: calling a function, at module level
        """
        test = _call_global
        expected = test()
        realized = interp.execute(test.func_code, test.func_globals)

        self.assertEqual(expected, realized)

Edit: Sometimes the answer is just too obvious: if I want square to be treated as a global, I can simply use the global keyword, of course:

    def test_call(self):
        global square
        def test():
            return square(4) + square(3)

        def square(n):
            return n * n
        [...]

You can have a look at interp_test.py developed in this post on github.

This post is again about the Python bytecode interpreter I have started in the three previous posts. This time I won’t develop the interpreter any further but rather start to set up an environment for this little project.

Until now I have just had a single test case in my main script, and as you could see if you followed the download links, the three versions of the script were simply saved as three different files: interp_1.py, interp_2.py, and interp_3.py.
This manual version control may be feasible as long as I have only one file and as long as I am the only developer, but even under these conditions finer-grained versioning would be desirable.

I could use a local version control system, but as I want to publish the code anyway, a distributed, public version control system is the better suited tool. I am new to distributed version control systems and have quickly looked at git and Mercurial. For both there are public repositories available: GitHub and BitBucket.

I found this question on StackOverflow about the differences between git and Mercurial, and I think for this project it does not matter much which system I use. I went for git / GitHub because I think it can be lower level, and if I want to try out more complicated things later, I would already have it as a real example. But for the moment I will probably use only basic features.

I have created leovt/interp on GitHub and have already checked in the files from the previous posts.
Each version posted previously is now a tag in the GitHub repository.

All I needed to do for setting up this repository is clearly explained on the GitHub website.

Let’s extend the interpreter with the capability to call into a function. Unlike the builtin function range in the last post, I would like the called function itself to be interpreted by my interpreter, and not just executed in the host Python.

def test():
    return square(4) + square(3)

def square(n):
    return n * n

In order to simplify the testing I do not want to adapt the globals dictionary every time I change the test function. Fortunately every function keeps a reference to the global environment it is defined in, so I will just tell the execute function to use the test function’s own globals.

print 'own execute function:', execute(test.func_code, test.func_globals)

The called function will need its own space to run in, e.g. to store its local variables. Also, the program counter of the called function must not overwrite the program counter of the calling function. Now it comes in handy that all the execution state is stored in a frame object: we can simply create a clean new frame for the called function.
Then we simply point our variable f, holding the currently executed frame, to the newly created frame to transfer control to the called function.

elif bc==CALL_FUNCTION:
    [...]
    elif type(callee) is types.FunctionType:
        subframe = Frame(callee.func_code, callee.func_globals)
        subframe.locals[:arg] = call_args
        f = subframe

Note that the function’s arguments are just copied into the locals of the called function. In fact (ignoring any * or ** magic) a function’s arguments are always its first local variables.
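This layout can be checked directly on a code object. The snippet below uses Python 3's __code__ attribute (func_code in the Python 2 of this post), but the ordering guarantee is the same:

```python
def square(n):
    result = n * n
    return result

# arguments come first in co_varnames, followed by the other locals
code = square.__code__
assert code.co_varnames[:code.co_argcount] == ('n',)
assert code.co_varnames == ('n', 'result')
```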

Let’s try to execute the test:

normal Python call: 25
own execute function: 16

What happened? Apparently we only see the result of the first function call, square(4). The interpreter quit too early, upon seeing the first return statement.

Of course, when we call into functions we also need to transfer control back to the calling function when the called function returns. At the moment we just lose the calling frame when we transfer control to the subframe. To be able to return control to the caller, we first need to keep the calling frame. We will give the frame object a reference to its caller.

class Frame(object):
    def __init__(self, code, globals, caller):
        self.locals = [None] * code.co_nlocals
        self.PC = 0
        self.stack = []
        self.globals = globals
        self.code = code
        self.caller = caller

The first function executed by our interpreter is not called from inside the interpreter, so for this first frame we set the caller to None.

f = Frame(code, globals, None)

In the CALL_FUNCTION implementation we keep the reference to the calling frame, and we also update the calling frame’s program counter to point to the next instruction before transferring control, so everything will be ready when we decide to return.

elif bc==CALL_FUNCTION:
    [...]
    elif type(callee) is types.FunctionType:
        subframe = Frame(callee.func_code, callee.func_globals, f)
        subframe.locals[:arg] = call_args
        f.PC += 3
        f = subframe

Now everything is ready for the return statement to be implemented in the RETURN_VALUE bytecode. Here we check whether we have a calling frame, and if so we push the returned value onto the calling frame’s stack and transfer control back.

elif bc==RETURN_VALUE:
    if f.caller is None:
        return f.stack.pop()
    else:
        ret = f.stack.pop()
        f = f.caller
        f.stack.append(ret)

And finally we get the desired result.

normal Python call: 25
own execute function: 25

You can download the code of the updated interpreter.

Edit: GitHub
I have created a GitHub repository for this project. See the revision for this post on GitHub.

A note on the value stack:
In my implementation each frame has its own stack, so a called function has no chance of corrupting the caller’s stack. If the called function guaranteed not to alter the bottom of the stack and to pop off all the values it pushed, the same stack could be shared between caller and callee.
In this hobby / educational implementation I prefer the separate approach, as it is probably more robust and easier to debug.