For all the advances in artificial intelligence, some technologies, such as the OpenAI Voice Engine, carry serious risks. The tool can generate a clone of a person's voice from a short audio sample and have that clone read out any message. According to those working on the technology, it is too risky to be made available for public use at this point.
OpenAI first developed the technology back in 2022, and it already powers the text-to-speech voices in ChatGPT. However, the voice cloning capability itself remains restricted, and the general public cannot access it. The main reason is that such voice cloning technology can easily be put to harmful use online.
OpenAI hopes to work with regulators and other relevant bodies to establish rules that will guide public use of this kind of technology. We have already seen voice cloning AI abused by bad actors on the internet. Last year, cloned voices of popular artists were used to create songs that circulated on various streaming platforms.
There is also the fear that this kind of technology could end up in the hands of bad actors and be used for scams, such as impersonating someone over the phone. While the technology can be misused in many ways, it also has plenty of legitimate applications, from reading assistance and content translation to restoring the voices of people who have lost theirs. Some firms and institutions around the world have already begun putting voice cloning AI to beneficial use.
While access to the OpenAI Voice Engine remains limited, the firm hopes to explore ways to bring it to the public. If that happens, there should be ways to tell that an audio clip was created with an AI voice cloner. OpenAI has pointed to measures such as watermarking generated audio to identify voice clones and trace their origin.
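OpenAI has not published how its watermarking works, but one common approach in research is spread-spectrum watermarking: a low-amplitude pseudorandom signature, derived from a secret key, is mixed into the audio and later detected by correlation. The Python sketch below is purely illustrative; the key, strength, and threshold values are assumptions, not details of Voice Engine.

```python
# Minimal sketch of spread-spectrum audio watermarking (illustrative only;
# not OpenAI's actual method). Key, strength, and threshold are hypothetical.
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.02) -> np.ndarray:
    """Mix a low-amplitude pseudorandom signature derived from `key` into the audio."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape)
    return audio + strength * signature

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.01) -> bool:
    """Correlate the audio with the keyed signature; a high score suggests a mark."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape)
    score = np.dot(audio, signature) / np.linalg.norm(signature) ** 2
    return score > threshold  # threshold would be tuned against false positives

# Usage: a 10-second synthetic 440 Hz tone at 16 kHz stands in for generated speech.
clip = np.sin(2 * np.pi * 440 * np.linspace(0, 10, 160000))
marked = embed_watermark(clip, key=1234)
print(detect_watermark(marked, key=1234))  # True: signature present
print(detect_watermark(clip, key=1234))    # False: no signature
```

Because only someone holding the key can generate the matching signature, the mark is hard to remove or forge, which is the property any real detection scheme for cloned voices would need.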