Following a Quora thread asking exactly how hard it would be to implement an Apple Siri clone from scratch, NiobiumLabs, alongside some students of the National Technical University of Athens, decided to have a bash at exactly that.
Utilising freely available open-source tools, the team managed to create a working clone of Siri in just an afternoon. It's an impressive feat, made possible in part by OpenEars, an open-source iOS library for round-trip English-language speech recognition and text-to-speech on the iPhone. Combining that with the Wolfram Alpha API, they managed to emulate the Siri user experience, with the emphasis on the artificial-intelligence side of the information-retrieval process.
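The round trip is simple to sketch: on-device speech recognition produces a text question, that question is sent to Wolfram Alpha, and the plain-text answer is read back aloud. Below is a minimal, hypothetical Python sketch of that pipeline; the function names and flow are illustrative stand-ins (OpenEars itself is an Objective-C iOS library), and only the query URL follows the shape of Wolfram Alpha's v2 query API (`appid`, `input`, `format` parameters).

```python
# Hypothetical sketch of the Siri-clone round trip. The recognition and
# text-to-speech steps are placeholders for what OpenEars does on-device.
from urllib.parse import urlencode

# Wolfram Alpha's v2 query endpoint; requires a developer AppID.
WOLFRAM_ENDPOINT = "http://api.wolframalpha.com/v2/query"

def build_wolfram_query(question: str, app_id: str) -> str:
    """Turn a recognised spoken question into a Wolfram Alpha query URL."""
    params = {
        "appid": app_id,          # developer key (placeholder here)
        "input": question,        # the recognised speech, as text
        "format": "plaintext",    # ask for plain-text result pods
    }
    return f"{WOLFRAM_ENDPOINT}?{urlencode(params)}"

def answer(question: str, app_id: str) -> str:
    # 1. OpenEars (on-device) would yield `question` from microphone audio.
    # 2. Fetch the Wolfram Alpha result from this URL (network call omitted).
    url = build_wolfram_query(question, app_id)
    # 3. OpenEars text-to-speech would then read the answer pod aloud.
    return url

if __name__ == "__main__":
    print(answer("population of Athens", "DEMO-APPID"))
```

The interesting design point is that only the *question answering* happens server-side; the speech recognition and synthesis stay on the handset, which is the inverse of Siri's architecture.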
You can grab the code for yourself here, as it has been released freely on GitHub.
Whilst Siri-quality on-device speech recognition is not yet feasible given the hardware limitations of the iPhone (Siri processes speech recognition server-side before sending the result back to the client), we will undoubtedly see the trend of quality voice-recognition APIs in mobile apps continue, particularly as recognition databases grow in size and become more accessible to developers.