AudioShake Raises $14 Million Series A Led by Shine Capital to Expand AI Audio Separation Technology


AudioShake, a San Francisco-based startup developing AI-powered technology that deconstructs audio recordings into their component stems (such as vocals, instruments, sound effects, and dialogue), has secured US$14 million in a Series A funding round. The round was led by Shine Capital, with participation from Thomson Reuters Ventures, Origin Ventures, Background Capital, and returning investors including Indicator Ventures and Precursor Ventures.

Since its founding in 2020 by CEO Jessica Powell (formerly of Google) and CTO Luke Miner (formerly head of data science at Plaid), AudioShake has developed deep learning models that can separate any audio file, even decades-old recordings or audio captured in the physical world, into high-quality isolated tracks. This enables music labels, film/TV studios, voice-AI firms, and game developers and publishers to remix, localize, dub, analyze, and unlock new uses of audio content that was previously fixed and uneditable.
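To make the idea of stem separation concrete, here is a toy sketch of the simplest possible approach: separating a mixed signal in the frequency domain with a spectral mask. This is not AudioShake's method (the company uses deep learning models, and all names below are invented for illustration); it only shows the underlying principle that a single mixed track can be split into components once you can tell them apart in the spectrum.

```python
import numpy as np

# Toy frequency-domain source separation: mix two sine "sources",
# then recover each one by masking the FFT spectrum of the mixture.
SR = 8000                      # sample rate in Hz, arbitrary for the demo
t = np.arange(SR) / SR         # one second of audio

low = np.sin(2 * np.pi * 220 * t)    # "bass" source at 220 Hz
high = np.sin(2 * np.pi * 1760 * t)  # "treble" source at 1760 Hz
mix = low + high                     # the single-track recording

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(mix.size, d=1 / SR)

# Binary mask: bins below 1 kHz form the "bass" stem,
# everything else forms the "treble" stem.
mask = freqs < 1000
bass_stem = np.fft.irfft(spectrum * mask, n=mix.size)
treble_stem = np.fft.irfft(spectrum * ~mask, n=mix.size)
```

Real music mixes overlap heavily in frequency, which is why simple masks fail and learned models are needed, but the output is the same in spirit: one mixed file in, isolated stems out.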

The company reports that in the past year it achieved nearly 400% year-over-year revenue growth, processed over 100 million minutes of audio, and signed more than 40 enterprise contracts, including major clients such as the world’s largest music labels and film studios. These statistics reflect strong demand for audio-separation tech across entertainment and AI-model-training pipelines. The new funding will be used to accelerate product development, expand hiring in engineering and sales, increase access to its APIs and real-time SDKs for developers, and scale its go-to-market presence globally.

AudioShake describes its ambition as making audio “as flexible as text or images” by enabling creators and machines alike to edit, search, interact with and repurpose sound in ways previously impossible. The platform’s ability to operate on existing single-track recordings opens up vast legacy catalogues for remixing or immersive audio experiences, enables dubbing and localization for film/TV, supports voice-AI training via clean separated tracks, and allows sports leagues and broadcasters to isolate audio elements (e.g., stripping unlicensed music from clips).

Investors appear to recognize both the creative and enterprise-AI potential of the technology. Shine Capital’s general partner noted that AudioShake is “building the foundational layer” for a new era of audio interactions. Thomson Reuters Ventures emphasized that audio remains one of the last frontiers of unstructured data for organizations, and AudioShake’s tech transforms that static asset into actionable, programmable data.

Despite the momentum, AudioShake faces challenges typical of enterprise-AI infrastructure: scaling its model accuracy and throughput globally, managing complex content-owner rights and licensing issues when providing stem separation services, securing continued adoption by large labels, studios, and broadcasters, and demonstrating sustained commercial value for enterprise customers beyond early use cases. As it moves ahead, the company must continue to grow its developer ecosystem, deepen integration into production workflows, and expand internationally.

With the Series A funding secured, AudioShake is positioned to advance from early adoption into broader deployment. The injection of capital gives the company the ability to build out its engineering and product capabilities, scale operations, and execute on its vision of transforming how audio is created, consumed and processed by both humans and machines. In doing so, AudioShake stands at the intersection of creative production, media engineering and AI infrastructure—and may play a key role in defining how sound will be manipulated in the age of generative systems.
