A system that hears and thinks like humans.
What we do
We are surrounded by millions of different sounds that carry important clues about our surroundings. For example, if you hear someone screaming, you know there is an emergency. If someone coughs a lot, they may have a bad cold. How great would it be if AI could understand all of these "sounds"?
Cochlear.ai delivers top-quality machine listening technology to solve issues and challenges around the world. We provide our technology through an easy-to-use cloud API and Edge SDKs, which can add hearing ability to any device or application with just a few lines of code.
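As a hypothetical sketch of what "a few lines of code" could look like, the snippet below packages an audio clip as a JSON request body for a cloud sound-recognition endpoint. The endpoint URL, field names, and response format here are illustrative assumptions, not the actual Cochlear.ai API.

```python
import base64
import json


def build_request(audio_bytes: bytes, content_type: str = "audio/wav") -> str:
    """Package raw audio as a JSON body for a hypothetical
    sound-event-detection endpoint (all field names are illustrative)."""
    payload = {
        "content_type": content_type,
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
    }
    return json.dumps(payload)


# A real client would POST this body to the service, e.g.:
#   requests.post("https://api.example.com/v1/sense", data=body, ...)
body = build_request(b"\x00\x01fake-audio-bytes")
print(json.loads(body)["content_type"])
```

The round-trip through base64 is a common pattern for sending binary audio inside JSON; an actual SDK would typically hide this plumbing behind a single call.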
Acoustic Scene/Event Identification
Humans understand their environment through various sensory inputs, and hearing is one of the most important of these senses. We extract information about acoustic scenes and events, which is among the most critical information an AI system needs to understand its surrounding context.
Music Information Retrieval
Music is a unique form of audio. Extracting high-level information such as tempo, downbeat, instruments, and mood from the raw audio signal can open up new possibilities for creative music applications and context-aware systems.
Voice Analysis
Speech recognition is used in various AI applications to understand spoken commands. However, voice contains much more hidden information about the speaker, such as age, gender, and emotion, which is highly useful for understanding a user's current state.
Cutting-edge sound AI research
The Cochlear.ai team achieved top ranks in all tasks of the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) Challenge 2017, the biggest competition in the machine listening field. In 2018, our team secured the top rank again in the DCASE general-purpose audio tagging task, competing among 558 teams on Kaggle.