The beauty of the Internet is that everyone can be famous. Create something great and they will come (with a little social marketing). Take, for instance, World Builder, a short movie by Bruce Branit. In a world without the Internet, you wouldn’t even know this movie existed, but now it can be shared freely.
And what a marvel it is. An incredible two years in the making, this movie deserves to be seen by anyone and everyone.
The Internet doesn’t stop there: to show your appreciation, you can become a fan, discuss the movie and keep up with the latest news.
This isn’t quite my usual fare, but a good tool is a good tool. If you are working on your new Web 2.0 application, integrating various services, there’s always a moment when you just can’t seem to get two services to work together. For some reason, one service is requesting the wrong data. Or is it the other service that’s replying with the wrong answer?
There’s only one way to get to the heart of the problem: you need to isolate both services and test your assumptions. There are various tools that will help you test plain HTTP-based REST services, but when dealing with a SOAP service, I found the options pretty limited.
Until I encountered soapUI. soapUI is a tool that lets you import WSDL files and work with client and server separately. You can create automated tests for the server, but you can also construct SOAP requests by hand. This lets you play with the parameters and figure out what works and what doesn’t. Once you’ve got what you want, you can automate tests that regularly verify your assumptions about the service (you know, for when the documentation fails you).
For the client, you can create a mock implementation of the web service. So even if you’re offline or the server is offline, you can continue developing.
Once upon a time, I was really into beatmixing. I never felt like investing in turntables and huge unwieldy record collections, so I was an MP3 DJ. I used the very affordable and incredibly good AtomixMP3 program.
So I was pleasantly surprised when I ran into Mixxx. It is a little more bare-bones, but it has all the features I need: output to multiple channels or soundcards and beatmatching. The BPM algorithm seems to be a little less sophisticated than the one Virtual DJ is using. But if you’re not happy with the automatic one, you can always tap your own rhythm.
It doesn’t do sound effects and some other fancy stuff, but I hardly ever used them anyway.
So I’m off to hook up my mixing panel and spin those beats.
After last week’s mixed success, I started implementing the more advanced techniques Yov408 describes in his article. However, nothing seemed to improve the calculated beats per minute. I was about to go and implement the Fourier transform, something I wanted to avoid in order to keep the algorithm zippy. But I went back to the spreadsheet and discovered a much simpler solution.
The second sample I tried detects about 3 or 4 beats too many. Upon closer inspection, those extra detections are all instances where the energy went above the threshold for only one sample.
Once I understood the nature of the problem, it was easy to implement a solution that only detects a beat when the energy stays high for a few more samples. I put this into code and was amazed by the results. Pretty much any song I used resulted in a BPM count within 5 BPM of the actual count.
The adapted algorithm is:
Every 1024 samples:
Compute the instant sound energy ‘e’ on the 1024 new sample values (an) and (bn) using formula (R1)
Compute the average local energy <E> from the sound energy history buffer (E):
Shift the sound energy history buffer (E) one index to the right, making room for the new energy value and flushing the oldest.
Insert the new energy value ‘e’ at the head of (E).
If ‘e’ > C*<E>, we detect a possible beat. If a possible beat was not detected in the previous calculation, we start counting: N = 0. Otherwise, N is increased by one.
If N equals a threshold (3 is a pretty good value), a true beat is detected.
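The adapted loop above can be sketched as follows. This is my own reading of the steps, not the post’s actual code: the history length of 43 blocks (roughly one second of 1024-sample blocks at 44.1 kHz) and C = 1.3 are illustrative assumptions; only the confirmation threshold of 3 comes from the text.

```java
// Beat detection with an N-block confirmation, per the steps above.
// step() takes the instant energy 'e' of one 1024-sample block and
// returns true exactly once per detected "true" beat.
public class BeatDetector {
    private static final double C = 1.3;    // assumed sensitivity constant
    private static final int CONFIRM = 3;   // blocks 'e' must stay high (from the post)
    private final double[] history = new double[43];
    private int filled = 0;                 // how much of the history is valid
    private int run = 0;                    // consecutive above-threshold blocks

    public boolean step(double e) {
        // average local energy <E> over the valid part of the history
        double avg = 0.0;
        for (int i = 0; i < filled; i++) avg += history[i];
        if (filled > 0) avg /= filled;

        // shift the history one index to the right, insert 'e' at the head
        System.arraycopy(history, 0, history, 1, history.length - 1);
        history[0] = e;
        if (filled < history.length) filled++;

        // possible beat: energy clears C * <E>; a true beat needs CONFIRM in a row
        if (filled > 1 && e > C * avg) {
            run++;
        } else {
            run = 0;
        }
        // fires only on the CONFIRM-th consecutive high block, so each
        // sustained burst of energy is counted as a single beat
        return run == CONFIRM;
    }
}
```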
Given the simplicity of the algorithm I think this is an incredible result and good enough to move into the next step of the project: Porting this to small devices.
It seems like everyone is doing it these days: tilt-shift photography. The technique has probably existed as long as photography itself, but it was rediscovered just last year.
If you want to do the “real” thing, you need a pretty expensive camera. You need a special lens and you need a lot of patience to set up the lens just right. So obviously, many tutorials have appeared on the net, explaining how to fake it in some graphics program (usually Photoshop).
But if even that is still too complicated or cumbersome for you, there is now an even more convenient solution: tiltshiftmaker.com.
Cool stuff, too bad it will probably be a little too much work to create a movie with this tool.
Last week I showed the beginnings of my audio analysis program. This week it’s time to talk about the goals. My final goal is to do BPM calculation on various music sources. It should be fairly fast, but there’s no need for a realtime readout. I did some Googling, but couldn’t find any freely available Java implementation. So I ended up reinventing the wheel.
I did, however, look for a little help. Yov408’s explanation on GameDev is an exceptionally good tutorial and introduction to beat detection. If you read the article, you’ll notice that the first thing you need is the energy of the music file. So I added an energy-calculating filter to my architecture.
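Sketched as code, with names of my own choosing (the post doesn’t show the filter itself), the article’s instant energy of a block is just a sum of squared sample values over both channels:

```java
// A minimal sketch of an energy-calculating filter: the instant energy
// of a block is e = sum over the block of (a_i^2 + b_i^2), where a and b
// are the left and right channel samples.
public class EnergyFilter {
    public static double energy(double[] left, double[] right) {
        double e = 0.0;
        for (int i = 0; i < left.length; i++) {
            e += left[i] * left[i] + right[i] * right[i];
        }
        return e;
    }
}
```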
Afterward, I wanted to get some insight into the algorithm, so I took some random samples and put them in a spreadsheet. My test song is one with a very very clear beat, so if my beat detection algorithm works on anything, this will be it.
I did have to expand the algorithm a little to calculate the actual BPM. The article only describes beat detection, but once you get that far, BPM calculation is fairly trivial (just count the beats and divide by the time).
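That trivial step can be sketched like this; the names are mine, and the 44.1 kHz sample rate in the usage example is an assumption, not a value from the post:

```java
// Count the beats, divide by the elapsed time in minutes.
public class BpmCounter {
    /** BPM for beatCount beats over totalSamples of audio at sampleRate Hz. */
    public static double bpm(int beatCount, long totalSamples, double sampleRate) {
        double minutes = totalSamples / sampleRate / 60.0;
        return beatCount / minutes;
    }
}
```

For example, 128 beats counted over one minute of 44.1 kHz audio, `bpm(128, 44100L * 60, 44100.0)`, gives 128 BPM.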
If you open the spreadsheet, you’ll notice that I got lucky with my first sample. I immediately calculated a pretty good BPM for the song (it is about 128 BPM). However, when I tried another sample of the same song, the result was completely wrong. You might also notice that I did not use the adaptive algorithm to calculate the threshold value C.
I tried to implement that, and although the theory sounds good, the results were even worse. I’m pretty sure I need to go over it one more time to figure out the best values for the constants (I have a different input range than the article). But it’s a start.
Next week, I’ll try to tune that adaptive algorithm and hopefully publish my code. If you have some experience in BPM calculation, I’d love to hear what algorithm you used, because there are as many theories out there as there are people calculating BPM counts.
Upon playing the game, I immediately thought of The Lost Vikings. It’s an old DOS (and other platforms) puzzle game that gives you a team of Vikings who need to work together to reach their goal. They each have their own speciality.
Just like The Lost Vikings, Leaving Loki’s Lockup involves Vikings working together. Both are nicely animated, funny and inventive.
Leaving Loki’s Lockup does fall a bit short in the control department. The controls aren’t very fluid and take a bit of getting used to. But once you know the quirks, this is another free casual game that’s very enjoyable.
As mentioned last week, my next project revolves around audio analysis. The first step is acquiring data and for that, I had already found the perfect Java solution. JLayer makes it easy to obtain data, but a sound file contains very large amounts of it. This post goes into a basic architecture to tame that data and get it into a form that can be processed.
JLayer can stream data to its own AudioDevice class. This is a callback class that has hooks for opening and closing a device, which we don’t need. The important hook is the one that sends the bytes to the device. This is where you can capture the raw stream. Most audio analysis, however, doesn’t use the raw stream, but averages the data to reduce the amount of data to process.
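A minimal sketch of such an averaging stage follows. The `write` signature loosely mirrors JLayer’s `AudioDevice` callback, but the class name, the 1024-sample block size, and the mean-absolute-amplitude reduction are my own assumptions, not the post’s code:

```java
import java.util.ArrayList;
import java.util.List;

// Captures raw 16-bit samples from a decoder callback and reduces each
// 1024-sample block to a single average-amplitude value.
public class AveragingSink {
    private static final int BLOCK = 1024;      // assumed block size
    private final short[] buf = new short[BLOCK];
    private int pos = 0;
    private final List<Double> averages = new ArrayList<>();

    /** Called with each chunk of decoded samples (like AudioDevice.write). */
    public void write(short[] samples, int offs, int len) {
        for (int i = offs; i < offs + len; i++) {
            buf[pos++] = samples[i];
            if (pos == BLOCK) {
                flushBlock();
            }
        }
    }

    // Reduce one full block to its mean absolute amplitude.
    private void flushBlock() {
        long sum = 0;
        for (short s : buf) sum += Math.abs(s);
        averages.add((double) sum / BLOCK);
        pos = 0;
    }

    public List<Double> averages() { return averages; }
}
```

Chunks can arrive in any size; the sink simply accumulates them until a full block is available, so the downstream analysis only ever sees one value per 1024 raw samples.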
Now that my previous project is finished, it’s time to start the next. In an entirely different direction, I will now tackle the field of audio analysis. My first step was to take a look at what’s already out there and I must admit, it was overwhelming. There are a lot of very specialized tools for just one task and there are extremely general tools that throw in everything and the kitchen sink.
Here’s a summary of the most interesting programs I found, which I will focus on a little more in the future. You’ll notice that most of the audio tools you can find for free are toolkits with only basic GUI frontends. I didn’t really find anything particularly user-friendly.
Vamp seems to be the audio analysis plugin framework that has the most traction within the open source community. It is fairly complicated to get everything configured the way you want, but there are many existing plugins available, so it’s probably worth your time.
Like Vamp above, Marsyas is a research-driven framework for audio analysis and more. It seems to have more industry support, but I guess most of that is closed source. The list of links on their project page is long and intriguing, and definitely worth a little more research.
SoundRuler is one of many tools that offers a view of the waveform, a fast Fourier transform (FFT) and some more. What makes it noteworthy is that it is geared toward beginners (me) and has a bunch of manuals. However, upon installing it I didn’t find it very user-friendly, but that might just be a wrong first impression. I will go through some of the documentation in the next week.
WaveSurfer is a similar program, but it seems to be more humble in its goals and therefore a little easier to interpret and work with.
I’m not sure if the CLAM project will suit my needs, but my interest was tickled by their graphical way of configuring the tools. Certainly something I’ll be writing a little more about later on.
So far I haven’t found anything that I’m really happy with. I have tried all of the above, except CLAM, and then another 5 to 10 that didn’t make the list. The only result is that I can’t help but feel lost. These tools dump boatloads of information on my screen with very little context or explanation.
I can understand the raw waveform plot, I can even interpret the FFT plots, but all the other stuff is just voodoo. I have the feeling all those tools were programmed by true audio geeks, with an extremely deep knowledge of waveforms. So if you are one of them, you’ll love them all. But what I need is a tutorial or course in sound analysis and sound properties. How to interpret all the little lines and numbers.
Since I have completed the storyline, this might be my final entry on GTA Chinatown Wars. The game only shows 48% completion, so I still have a lot of stuff to do. I don’t think I’ll be going for 100%, but I might take on a few additional missions. According to the in-game counter, I’ve put over 10 hours into the game, so I got my money’s worth. However, there are still things I’d like to discuss.
I haven’t talked about the story yet, because I didn’t think it was worth mentioning. It starts off pretty so-so, but in the end things come together nicely and I guess it’s an acceptable excuse for a big blowout fight. For a game, it’s a pretty OK story. It might even make a movie worth watching, but it wouldn’t work as a book. As far as storytelling is concerned, game makers are stuck between a rock and a hard place. On one hand, you don’t want to spend too much time in cutscenes to show off your story; on the other hand, it’s difficult to tell a story if you only have action sequences. Ever since adventure games and full-motion video games went out of style, we only have games like Half-Life left that walk this fine line pretty nicely.
If you complete the story missions, you will undoubtedly remember the last three forever. The game ends in some seriously intense gunfights. If you don’t like the gunplay in Chinatown Wars at all, I’m afraid you will be a bit bummed by the end. But by the time you make it this far into the game, you’ll probably have a pretty good grip on how to handle the situation. I thoroughly enjoyed the final missions, although I had to replay them quite a few times. The key to victory: take it slow, don’t rush.
After you complete the story, the “Guardian Lions” mission is unlocked. Once completed, you can upload the mission data to the Rockstar Social Club for a bonus. Although it’s a small thing, I really love this online integration (and it cuts back on piracy, so it’s a win-win).
Anyway, if you still haven’t got the game and aren’t convinced by now, I don’t think I ever will convince you, but still: go buy and play the game!