Vector math basics to animate a bouncing ball in JavaScript

Vector math is pretty much essential when you want to do any kind of physics simulation, even one as simple as a bouncing ball. While my original goal was to implement a flocking simulation (like birds flying close to each other, but not too close), my lack of math skills led me to build a bouncing ball simulation first.

At the same time, I wanted to see what Khan Academy is all about. Turns out they have lessons on all kinds of math, but very little on vectors specifically. There are two lessons on vector basics, of which the second is a lot more practical. There are also two exercises, one for adding vectors and one for scaling them. Both are pretty easy, and you should be done within a minute if you’ve watched the videos.

With that knowledge under your belt, let’s look at a practical application: the aforementioned bouncing ball. To start, take a look at the demo and play around with it; there are some instructions at the top.

You can find the source code for that demo on GitHub; here is the main file for the bouncing balls demo. I’m not going to discuss the Point and Vector classes that it uses, though you should take a look. They just implement adding and scaling of vectors, and calculating a vector based on two points.
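
To make the rest easier to follow, here is a minimal sketch of what those two classes might provide, reconstructed from how they’re used below; the actual implementation in the repository may differ (for example in the direction relative uses):

function Vector(x1, x2) {
	this.x1 = x1;
	this.x2 = x2;
}
Vector.prototype = {
	// new vector, the component-wise sum of both vectors
	add: function(other) {
		return new Vector(this.x1 + other.x1, this.x2 + other.x2);
	},
	// new vector with both components multiplied by factor
	scale: function(factor) {
		return new Vector(this.x1 * factor, this.x2 * factor);
	}
};

function Point(x, y) {
	this.x = x;
	this.y = y;
}
Point.prototype = {
	// vector pointing from this point to the other point (direction assumed)
	relative: function(other) {
		return new Vector(other.x - this.x, other.y - this.y);
	}
};

With that in mind, let’s walk through the actual demo code: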

var GRAVITY = new Vector(0, 9.81);
var FRICTION = 0.85;
var world = {
	x1: 0,
	y1: 0
};
$(window).resize(function() {
	world.x2 = $(window).width();
	world.y2 = $(window).height();
}).trigger("resize");

This defines two constants, GRAVITY and FRICTION, which we’ll use later to affect the simulated objects. GRAVITY is a vector pointing downwards: the second component represents gravity’s 9.81 meters per second squared of acceleration, while the first component is zero. FRICTION is used in collisions later and is a completely arbitrary value.

The world object is also used in collision detection and represents the dimensions of our 2D world. It starts at 0/0 in the top left corner and ends at x2/y2 in the bottom right corner. We bind a resize handler on window to update those dimensions, so that collisions happen within the browser window, no matter how big it currently is.

Next up is the definition of our Ball class:

function Ball() {
	this.position = new Point(200, 200);
	this.output = $("<div>").addClass("dot").appendTo("body");
	this.velocity = new Vector(-5, 0);
}
Ball.prototype = {
	remove: function() {
		this.output.remove();
	},
	move: function() {
		// apply gravity
		this.velocity = this.velocity.add(GRAVITY.scale(0.1));

		// collision detection against world
		if (this.position.y > world.y2) {
			this.velocity.x2 = -this.velocity.x2 * FRICTION;
			this.position.y = world.y2;
		} else if (this.position.y < world.y1) {
			this.velocity.x2 = -this.velocity.x2 * FRICTION;
			this.position.y = world.y1;
		}
		if (this.position.x < world.x1) {
			this.velocity.x1 = -this.velocity.x1 * FRICTION;
			this.position.x = world.x1;
		} else if (this.position.x > world.x2) {
			this.velocity.x1 = -this.velocity.x1 * FRICTION;
			this.position.x = world.x2;
		}

		// update position
		this.position.x += this.velocity.x1;
		this.position.y += this.velocity.x2;

		// render
		this.output.css({
			left: this.position.x,
			top: this.position.y
		});
	}
};

This defines a Ball constructor, which initializes a new Ball at some arbitrary position, with some velocity to the left. It also creates a simple DOM element that we use for output.

The prototype of Ball has two methods. The remove method just removes the DOM element, which we use for cleanup. The move method is much more interesting: It gets called for each ‘tick’ of our animation loop, so we use it to update the current velocity, look for collisions, update the current position and render the result. Step by step:

this.velocity = this.velocity.add(GRAVITY.scale(0.1));

This adds GRAVITY to the ball’s velocity. While GRAVITY has a real-world value, we need to adapt it to our pixel-based dimensions and to the length of a single tick, which is what the scaling by 0.1 does. Doing this in every tick causes the ball to accelerate downwards, or, when it’s moving upwards, to decelerate.
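
The demo doesn’t derive the 0.1 anywhere, it’s just a factor that looks right. As a back-of-the-envelope sketch of where such a number might come from, with PIXELS_PER_METER and TICK_SECONDS being made-up values for illustration:

var PIXELS_PER_METER = 100; // assumed scale: how many pixels represent one meter
var TICK_SECONDS = 0.025;   // one tick of the 25ms animation loop

// velocity is measured in pixels per tick, so the per-tick change in velocity is
// 9.81 m/s² · 100 px/m · (0.025 s)² ≈ 0.61 px per tick, per tick
var gravityPerTick = GRAVITY.scale(PIXELS_PER_METER * TICK_SECONDS * TICK_SECONDS);

With this alone our ball would start falling, but it would never stop. That’s where the next block comes in, the collision detection: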

if (this.position.y > world.y2) {
	this.velocity.x2 = -this.velocity.x2 * FRICTION;
	this.position.y = world.y2;
} else if (this.position.y < world.y1) {
	this.velocity.x2 = -this.velocity.x2 * FRICTION;
	this.position.y = world.y1;
}
if (this.position.x < world.x1) {
	this.velocity.x1 = -this.velocity.x1 * FRICTION;
	this.position.x = world.x1;
} else if (this.position.x > world.x2) {
	this.velocity.x1 = -this.velocity.x1 * FRICTION;
	this.position.x = world.x2;
}

Here we compare the current position of our ball to the dimensions of the world. For each direction, there’s a check whether the ball is beyond the limit; if so, the velocity for that direction is inverted while applying FRICTION. This causes the ball to bounce back slightly slower than it was before, simulating very primitive friction. To avoid glitches where the ball goes beyond the world dimensions and doesn’t come back, the position gets updated to move it back inside the defined limits.

Now that we’ve updated the velocity (and fixed the position in case of a collision), we can update the resulting position and output it:

// update position
this.position.x += this.velocity.x1;
this.position.y += this.velocity.x2;

// render
this.output.css({
	left: this.position.x,
	top: this.position.y
});

This adds the velocity components to the position of the ball, then uses inline styles to update the position in the DOM.

Next we’ll look at the setup and animation loop:

var balls = [];
balls.push(new Ball());

// animation loop
setInterval(function() {
	balls.forEach(function(ball) {
		ball.move();
	});
}, 25);

Here we create an array of balls and add one initial Ball. Then we start an interval at 25ms, which should give us about 40 frames per second (fps). To get a smoother 60fps, we’d have to go down to about 16.7ms, which would also be even more CPU intensive than this already becomes with lots of balls.

Inside the interval, we just loop through all balls and call the move method for each. In a proper game engine, this loop would separate the position updates from the rendering to ensure that, when frames get dropped, the game itself doesn’t slow down.
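
A minimal sketch of what such a decoupled loop could look like, using a fixed timestep and requestAnimationFrame; ball.render() is a hypothetical method that would contain only the this.output.css() call from move above, while move itself would be reduced to updating velocity and position:

var TICK_MS = 25;
var lastTick = Date.now();

function frame() {
	var now = Date.now();
	// run as many fixed updates as needed to catch up with real time
	while (now - lastTick >= TICK_MS) {
		balls.forEach(function(ball) {
			ball.move();
		});
		lastTick += TICK_MS;
	}
	// render once per frame, no matter how many updates just happened
	balls.forEach(function(ball) {
		ball.render();
	});
	requestAnimationFrame(frame);
}
requestAnimationFrame(frame);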

Up next, we’ve got the code to add new balls, with user controlled initial velocity:

var start;
$(document).mousedown(function(event) {
	start = new Point(event.pageX, event.pageY);
}).mouseup(function(event) {
	var end = new Point(event.pageX, event.pageY);
	var ball = new Ball();
	ball.position = end;
	ball.velocity = start.relative(end).scale(0.2);
	ball.move();
	balls.push(ball);
});

Here we bind mousedown and mouseup events, each time creating a Point object from the pageX and pageY event properties. In the mouseup handler, we then use the end point as the starting position for the new Ball object. Using Point’s relative method, we calculate a vector between those two points, scale it down and use it as the velocity for the new ball. That way, you can just click anywhere to add a new ball, or click, drag and let go to create one with an initial velocity based on the drag. To get the ball animated along with the others, it’s added to the balls array.

With that we’re almost at the end. The last piece just clears all balls when pressing Escape:

$(document).keyup(function(event) {
	if (event.keyCode === 27) {
		balls.forEach(function(ball) {
			ball.remove();
		});
		balls.splice(0, balls.length);
	}
});

And that’s it! Thanks to Vector, we’ve got a pretty sane implementation, and a good starting point for further improvements. And there’s lots of potential:

  • Better friction simulation: Currently balls keep bouncing for quite a while; they don’t slow down as much as they should after losing some height.
  • More collision detection: Detecting collisions with other balls, the mouse or other objects would make the whole thing a lot more interesting.
  • Better collision detection: Currently collision detection just happens against a fixed position, not the actual ball’s dimensions. Taking the (rounded) borders into account would make things quite a bit more complicated, but also more realistic (a small sketch of radius-aware bounds checks follows after this list).
  • More moving objects with other shapes: Currently there are just points bouncing around, even though they’re rendered as balls. Adding square objects, both animated and static, could make things a lot more interesting.
  • 3D: Moving from a 2D to a 3D simulation involves adding another component to both the Vector and the Point class, and would add that third dimension to each calculation, making especially the collision detection, already the most complex part, even more complex.
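
As a starting point for the radius-aware collision detection mentioned above, here’s a minimal sketch for the vertical axis; RADIUS is an assumed value that doesn’t exist in the demo, and the size of the .dot element would have to match it:

var RADIUS = 10; // assumed half-size of the rendered .dot element

// vertical bounds check that respects the ball's size instead of a single point
if (this.position.y > world.y2 - RADIUS) {
	this.velocity.x2 = -this.velocity.x2 * FRICTION;
	this.position.y = world.y2 - RADIUS;
} else if (this.position.y < world.y1 + RADIUS) {
	this.velocity.x2 = -this.velocity.x2 * FRICTION;
	this.position.y = world.y1 + RADIUS;
}

The horizontal axis works the same way with world.x1, world.x2 and the velocity’s first component.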

With this, I’ll get back to working on the basics for my flocking simulation.

Home Theater PC

At some point I was planning on building a Home Theater PC (HTPC), but so far I haven’t gotten anywhere. The setup I have right now, my regular PC (mostly a gaming machine) plugged into the TV with some silly cable setup to a 2.1 audio system, is working well enough.

So if this is something you’re interested in, here are some resources I gathered on the topic. To start, Martin Fowler provides a good overview of the various options. He also links to Jeff Atwood (aka codinghorror), who built his own system.

A friend of mine has a setup consisting of a jailbroken Apple TV (on amazon.de, amazon.com) and a Synology NAS (on amazon.de, on amazon.com). The Apple TV runs the frontend software and streams data from the NAS. After a jailbreak, you can put Boxee or XBMC on it; otherwise you’d be restricted to Apple’s network storage options. The Apple TV is quite cheap, the NAS isn’t, and you also need to buy disks.

Maybe I’ll revisit this sometime next year, for now my regular PC with Zoomplayer does the trick. What would you recommend?

Using junction links to backup savegames via Dropbox on Windows

This is another “I never want to look this up again” blog post, maybe someone finds it useful.

My use case: Automatically back up savegames through Dropbox, on the one hand as an actual backup, on the other to share them across computers. To do this on Windows, you need a junction link. In this case, I moved Skyrim’s Saves folder to a Dropbox folder, then created a junction link at the original location that points to the Dropbox folder. I don’t want to share the init files across computers – if you need that, just go a level higher.

C:\Users\joern>mklink /j "Documents\My Games\Skyrim\Saves" Dropbox\savegames\Skyrim\Saves

More information on the mklink command, or on NTFS symbolic links in general.

Notes from Velocity Europe

In early November I attended the first Velocity Europe conference. It was the second conference where I attended without speaking and paid for the ticket myself (Mobilism 2011 was the first, and I’ve already bought a ticket for the 2012 edition…). There’s a lot to say about the conference, though there don’t seem to be many blog posts about it (with this nice exception). You should watch the talks online, as far as they are available.

During the conference I scribbled down various notes that I’m reproducing here. Some notes didn’t make sense anymore, or I couldn’t remember their context. For others, I’ve gathered more information on the topic since the conference. Maybe there’s an idea or two that catches your interest. Let me know if you want more details on any of them. In chronological order:

  • “It’s not in production if it’s not monitored” – this was mentioned in the keynote on the first day. Useful as a little thought exercise: How many servers do you run that aren’t monitored? Where monitored means: How long does it take for you to realize that something crashed? Services like pingdom are, on their own, not enough – maybe your service is still up and serves nice HTTP 200 responses, but the content is just blank.
  • Related to the above, you should have a monitoring dashboard that everyone on your team can access, no matter where they are located. For an office, put up a monitor that shows the dashboard during the day, visible for everyone.
  • Performance regression testing: Building a site that’s fast is one thing, making sure it stays fast is another. Usually no one complains about a slight degradation in site performance, so your site’s performance ends up like the boiling frog. Tools like Page Speed and WebPagetest with their public APIs could be an option: As part of your delivery process, deploy your product to some public location and have those tools look for problems. Make a list of acceptable issues and see if the score gets worse. If you can’t deploy to a public site, you need to set the tools up yourself, which might make sense in either case if you want to write custom performance regression tests.
  • Page Speed Online also supports a “mobile” mode. For m.soundcloud.com, the result for mobile is actually worse than for desktop (84 vs. 95).
  • JavaScript pays a price in performance for dynamic typing, and one solution to that are typed arrays. Those are basically much more memory efficient compared to a regular array, and where useful, the type restriction is not a problem. Typed arrays are required to make things like WebGL feasible, but could also be interesting for more involved canvas renderings. For any large array of numbers, check if a typed array type is available and wrap your array (a minimal sketch follows after this list).
  • Kind of related to JavaScript performance: When you’ve got an array, don’t add any other properties to it (only regular array items). Otherwise you’ll lose a lot of JIT-optimized performance.
  • The _super and _superApply methods, upcoming in jQuery UI 1.9’s widget factory, no longer require the method argument. That was an oversight from earlier changes which, for some reason, became apparent during the conference and was fixed recently.
  • Apparently switch is bad for various reasons (I can’t remember the context), but the suggested alternative was interesting. Nothing new per se: Use an object with a property for each case. But here’s the kicker: Add a _default property and call that if the property you’re looking for doesn’t exist. I can’t remember the exact example, but it looked something like the second sketch after this list (not tested).
    Update: As Rick Waldron puts it: “modern jit and method tracers will black list code that uses switch with string cases (non deterministic)”. The thing was from Mathias Bynens’ talk (slides); his code examples looked a little bit different.
  • Someone used a comic from Questionable Content in their slides. I can’t remember which one. It was funny.
  • Also funny, but in a very different way: the keynote on the second day by Artur Bergman. He cursed a lot and even called out nodejs as having shitty code (he still likes and uses it). They’ve got a service called fastly, kind of a CDN, it probably also sucks, but maybe a little less.
  • There was a talk about the upcoming Amazon Silk browser. By now there are actual reviews of the Amazon Kindle Fire using that browser, and reviewers seem to be generally disappointed, as the accelerated mode doesn’t deliver on its promise. Which actually makes sense, at least according to Steve Souders (he’s co-running the conference…). The note I wanted to share here goes even beyond that: Amazon already sells a lot of their web services, and there’s a lot of potential to add the Silk infrastructure to those offerings. Say they never figure out a way to accelerate SSL connections: Just host your site on the service (you may already be doing that anyway), and move the optimizers in front of the SSL termination (or behind, depending on your perspective). Instead of carefully hand-optimizing your site, you could just leave it up to them. Similar to using mod_pagespeed, but more cloudy.
  • Chrome Frame is awesome and you should use the forced mode to get more users to install it. I’m not a fan of Alex Russell, but I’m very thankful for his work on Chrome Frame.
  • Languages with garbage collection built in are nice, but once you screw up, they have to stop the world to collect your shit, also known as the “embarrassing pause”. Luckily for you, JavaScript engines are now getting incremental garbage collectors, so you can be even more sloppy.
  • Law of AppStores: Money made by Apple equals the amount of money lost by others trying to make money with an app store. Maybe. Anyway, installable web apps kind of make sense when you consider permissions: You don’t want just any site to store 1GB of data on your computer, but you also don’t want the one site you use every day to ask you for permission again and again. So “installing” a web app to give it more permanent permissions seems reasonable. But then having a central location to “discover” those applications is really stupid and doesn’t make sense outside of Apple’s little world. That’s where the Mozilla proposal, while still in its very early stages, makes so much more sense: Define a standard that allows a site to ask for “installation”, along with the one-off permissions it wants. But instead of putting all apps into a single “store”, just add that metadata to your site and let users make use of it when they visit it anyway. Mozilla’s page on that doesn’t tell you that story as far as I can tell, but I hope they get there. At least Chris Heilmann explained it like that, and it made a lot of sense to me.
  • Another project that Chris mentioned was Tilt. It’s a Firefox extension that visualizes DOM nesting by rendering it in 3D. I still need to try it myself, but the examples that Chris showed were pretty cool, including “facebook city”. Imagine lots of really high towers…
  • Firebug is dead. It may still walk, but Mozilla is now finally working on their own developer tools, integrated into the development process. You can already see the new console in action, on Mac via Cmd+Shift+K.
  • On CSS performance: Using text-indent with a big negative value to hide text in favor of the background-image is a common practice. On mobile it can actually hurt performance, as the element gets really big and takes up a lot more memory than necessary. It’s hard to find anything about that issue on the internets, so for now just a link to Estelle’s slide on that.
  • In the same talk, Estelle talked about reusing a pool of DOM elements (also on the next slide). I’m a bit suspicious about the cost of creating DOM elements being the bottleneck in the scenario here, which is about rendering really long lists of items. Mobile memory is very limited, so you have to remove stuff from the DOM in order to render new elements, but I don’t quite see how reusing those elements would make much of a difference. Anyway, it needs research, but could be a useful technique.
  • Estelle also covered, very briefly, the technique used on the mobile bing site to deliver a fast first response combined with custom caching. The idea is to inline JS and CSS (along with inline images) on the first request, as regular script and style elements. Once loaded, the content of those elements gets stored in localStorage. In addition, a cookie is set to indicate what resources have been stored. On the next request, the server takes that cookie as an indicator to figure out what resources to serve: None if everything is cached, only specific files if the cached version is outdated. Estelle also mentioned that the mobile bing site works only for US IP addresses, so not trivial to reproduce. Worth more research.
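
To illustrate the typed array note above, here’s a minimal sketch of the kind of wrapping I mean; Float64Array is a real browser API, but the feature check and the numbers are just for illustration:

var values = [1.5, 2.25, 3.75, 4.5];

// use a typed array where available, fall back to the plain array otherwise
var fastValues = typeof Float64Array !== "undefined" ? new Float64Array(values) : values;

var sum = 0;
for (var i = 0; i < fastValues.length; i++) {
	sum += fastValues[i];
}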
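
And here is roughly what the switch alternative with a _default property could look like; this is a reconstruction from my notes, not the actual code from the talk:

var handlers = {
	start: function() {
		console.log("starting");
	},
	stop: function() {
		console.log("stopping");
	},
	// called whenever no matching property exists
	_default: function() {
		console.log("unknown action");
	}
};

function handle(action) {
	// look up the handler for the action, fall back to _default
	(handlers[action] || handlers._default)();
}

handle("start"); // "starting"
handle("jump");  // "unknown action"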

A good conference. I met a good bunch of people I’d met before, and some interesting new faces. I didn’t gather enough contact details to stay in touch with them, though.

PS: Refuse to let T-Mobile manage your conference wireless networks.