
ChatGPT hallucinated about music app Soundslice so often, the founder made the lie come true
In a turn of events that highlights the unpredictable nature of artificial intelligence, Adrian Holovaty, founder of the music-teaching platform Soundslice, recently solved a mystery that had been quietly plaguing his company: persistent, erroneous claims made by ChatGPT about his application's capabilities.
What began as a stream of unusual images (screenshots of ChatGPT sessions) being uploaded to Soundslice's servers soon revealed a deeper truth: ChatGPT was inadvertently acting as a powerful, if dishonest, promoter for the platform. The chatbot was repeatedly fabricating a feature for Soundslice, a music app known for synchronizing video with interactive music notation.
Holovaty, known in the tech world as one of the creators of the open-source Django web development framework, launched Soundslice in 2012. The platform, which he proudly describes as "bootstrapped," is widely used by music students and teachers. Its core innovation is an interactive video player that synchronizes video playback with musical scores to guide users through pieces. Soundslice also offers a "sheet music scanner," an AI-powered tool that converts images of traditional paper sheet music into interactive digital notation.
It was while monitoring the error logs of this scanner that Holovaty first noticed the anomaly. Instead of the expected images of sheet music, his logs were flooded with screenshots of text-based ASCII tablature, a simplified guitar notation system that uses standard keyboard characters (for example, "e|--0--2--3--|"), generated within ChatGPT sessions.
Initially baffled, Holovaty recounted his weeks of confusion in a detailed blog post. “Our scanning system wasn’t intended to support this style of notation. Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks — until I messed around with ChatGPT myself.”
His own experimentation with ChatGPT quickly revealed the source of the mystery: the AI was confidently telling users that they could upload these ASCII tablature screenshots to Soundslice to turn them into audible music. The problem? Soundslice had no such functionality. Users were arriving with expectations the app couldn't fulfill, leading to disillusionment and a "reputational cost," as Holovaty described it to TechCrunch.
Faced with a choice—either implement disclaimers across the site stating what Soundslice *couldn’t* do, or embrace the challenge and build the feature ChatGPT was hallucinating—Holovaty opted for the latter. Despite never having considered supporting the niche ASCII tablature system before, he decided to develop the capability to convert these text-based notations into interactive, hearable music within Soundslice.
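Holovaty hasn't described how Soundslice's converter works, but the core parsing step behind such a feature can be sketched in a few lines. The following is purely a hypothetical illustration, not Soundslice's code; the function name and format assumptions are the author's. The idea: each line of an ASCII tab names a guitar string, and digits along the line mark fret numbers at positions read left to right.

```python
# Hypothetical sketch of ASCII-tab parsing; not Soundslice's implementation.
# Assumes one tab line per string in standard tuning, e.g. "e|--0--2--3--|".

STRING_NAMES = ["e", "B", "G", "D", "A", "E"]  # high E string first

def parse_ascii_tab(tab: str) -> list[tuple[str, int, int]]:
    """Return (string_name, fret, column) note events from a simple ASCII tab."""
    events = []
    for line in tab.strip().splitlines():
        name, _, body = line.partition("|")
        name = name.strip()
        if name not in STRING_NAMES:
            continue  # skip lyrics, chord names, or other non-tab lines
        col = 0
        while col < len(body):
            ch = body[col]
            if ch.isdigit():
                num = ch
                # Frets can be two digits (e.g. "12")
                if col + 1 < len(body) and body[col + 1].isdigit():
                    num += body[col + 1]
                    col += 1
                events.append((name, int(num), col))
            col += 1
    return events

tab = """
e|--0--2--3--|
B|--1--------|
"""
print(parse_ascii_tab(tab))
# → [('e', 0, 2), ('e', 2, 5), ('e', 3, 8), ('B', 1, 2)]
```

A real converter would then map each (string, fret) pair to a pitch and the column positions to timing, but even this toy version shows why the format is machine-readable enough to tempt a chatbot into promising support for it.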
This decision, while ultimately beneficial for users, presented Holovaty with a unique dilemma. “My feelings on this are conflicted. I’m happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?” he pondered. He even questioned if this marked the first documented instance of a company being compelled to develop a feature due to widespread AI misinformation.
The tech community on Hacker News offered a compelling parallel, suggesting that ChatGPT’s behavior was akin to an overly enthusiastic human salesperson making grand, unfulfilled promises, thereby pressuring developers to innovate. Holovaty, a seasoned developer himself, found this comparison both “apt and amusing,” acknowledging the strange new frontier AI hallucinations are carving out for product development.



