OpenAI’s recently announced GPT-4o model introduced a voice assistant option named ‘Sky,’ which quickly drew widespread interest for its emotionally expressive delivery. That interest soon turned to scrutiny, however, as ‘Sky’ came under fire for its vocal resemblance to actress Scarlett Johansson.
The voice integration in GPT-4o echoes the premise of the 2013 film “Her,” in which Johansson voiced an AI capable of forming emotional connections. As GPT-4o demonstrated a comparable level of emotional expressiveness, the similarity between ‘Sky’ and Johansson’s performance in “Her” became a focal point of discomfort for the actress, and the launch of the voice features raised questions about their inspiration and potential intellectual property implications.
Johansson publicly objected to the resemblance between her voice and that of the AI assistant. She said she was taken aback by the choice of voice for ‘Sky,’ which appeared to mirror her own despite her earlier decision to decline lending her voice to the platform. Reactions from her personal circle and the broader public reinforced the sense that the likeness was unmistakable.
In response, OpenAI released statements explaining its voice-actor selection process, emphasizing that the voices were chosen through a rigorous casting process and that the professional voice actors involved were fairly compensated. Addressing the Sky controversy directly, OpenAI stated that any likeness was unintentional: the voice belongs to a different professional voice actor, whose privacy the company is committed to protecting, and it has released no further information about the person behind the voice.
Erring on the side of caution, OpenAI has temporarily paused use of the ‘Sky’ voice amid the controversy. It has not confirmed whether the voice will be withdrawn permanently, but the pause signals an intent to address the situation responsibly.
As AI technology continues to advance, this episode underscores the need to weigh ethical implications and impacts on individual rights, including voice appropriation and intellectual property. It serves as a reminder that as AI mimics more human attributes, the line between technology and personal identity can blur, warranting careful consideration and respect for individual likeness.
Engaging with AI is becoming more natural and human-like, and with this progress come new challenges and responsibilities. Users and developers alike should be mindful of this balance as they navigate the evolving landscape of AI interactivity and personal privacy.