We solve problems.
Through a customized approach, we apply state-of-the-art automatic speech recognition and natural language processing techniques. Our scientists use everything from linear regression to deep learning and have the freedom to explore various machine learning algorithms to deliver the best results.
Canary Speech technology is patent protected, with one issued US patent and five additional patents pending in the US and internationally. We are on the cutting edge of a major medical and technical breakthrough that has the potential to positively impact the lives of millions of people, reduce costs, expand tele- and remote medical services, provide screening for a range of diseases, and consequently enable people and organizations to improve quality of life.
Get in touch.
We are constantly creating new strategic relationships with highly respected organizations that have been key in accelerating FDA approval of this technology for drug discovery, clinical diagnostic use, medical screening, and concussion assessment.
These strategic partnerships span the market verticals of healthcare, pharmaceuticals, and health and wellness, including concussion assessment. If you’d like to discuss a potential partnership, please reach out.
contact us ▸
Meet our team.
CEO, Board member
Henry O'Connell has over 20 years of executive and C-level experience. Following graduate school, O'Connell began his career at the National Institutes of Health in a neurological disease group and went on to a successful business career specializing in turnaround situations in the tech industry. He has served on several boards in both the private and public sectors. O'Connell's experience spans the globe: he has managed companies in North America, Europe, and Asia.
Technical advisor, board member
Jeff Adams is the founder of Cobalt Speech and Language, a team of elite speech scientists and engineers who build custom applications. Adams is a top-level speech and language technologist with over two decades of research experience, spent on groundbreaking projects at Kurzweil AI, Nuance / Dragon, Yap, and Amazon. Adams is the author of 25 patents and several published research papers.
Phillip Walstad is a seasoned global operations and technology executive with more than 23 years of experience managing large and diverse organizations. He is an exceptional technologist who has overseen dozens of successful technology product and application development projects, as well as consumer product, e-learning, and e-commerce initiatives. Walstad spent 5 years in Japan overseeing operations, web services, and technology product divisions, sharing responsibility for more than $600 million in annual sales.
Vice President, Engineering
Kevin Yang received a B.S. in EECS from the University of Michigan. After working at MIT Lincoln Laboratory, he was one of the first computer scientists to join the new Amazon Echo speech team in 2012, where he was a standout engineer and contributed in key areas to the Amazon Echo, considered the most advanced application of speech and language technology in recent years.
Chief Speech Scientist
Jangwon Kim, PhD
Jangwon is an expert in multimodal signal processing, speech recognition, speech production, and machine learning for speech processing. His interests include applications such as robust Automatic Speech Recognition (ASR), affective computing, Human-Computer Interaction (HCI), computational paralinguistics, healthcare, security, and defense.
Kwon is a data scientist with comprehensive data analysis knowledge and extensive development experience. She has in-depth knowledge of advanced machine learning approaches and their application to real-world problems. Kwon has researched and developed natural language processing products, and is a “big data” specialist skilled with Hadoop and NoSQL databases. She leads data analysis and prediction projects.