The opinions expressed in this article are the writer’s own and do not reflect the views of Her Campus.
As an engineer, I still find it magical that we live in a reality once portrayed as the future in the movies and TV shows I grew up with. Having taken a course or two on AI and machine learning, I know that the science behind it is not as pretty as the final product: it is a whole lot of math that neither I nor the most proficient practitioners find particularly fun. However, we live in a day and age where someone without a STEM degree can still use, manipulate and apply AI tools. This ungoverned access to AI is heading down a precarious path; where do we draw the line when it comes to its ethics?
An email I recently received from my university addressed the use of tools like ChatGPT in assignments and university work. My parents have always pointed out the privilege I have with technology like Google while they had to rely on physical references at libraries. Fast forward to today: ChatGPT is capable of writing your entire assignment for you, eliminating the need to even Google things. Not a single department of my university is unaffected. ChatGPT can write your history essay and finish up that coding assignment. And while software like Turnitin is catching on to AI-written pieces, counter-tools have emerged that you can run an AI-written piece through to make it undetectable by Turnitin. While what AI should be capable of doing has been debated for years, where do we students draw the line in using it?
Deepfakes are another AI-powered tool that had me questioning the use of AI and its ethical principles. Earlier this year, Meta was caught up in a scandal in which deepfaked videos of female celebrities appeared in ads on the platform. As I read an article about it, chills ran down my spine as I realised the repercussions of technology like this falling into the wrong hands. The world is only too familiar with blackmail and revenge porn; now, even if you are careful, you are never truly safe from them. As a woman especially, this is truly terrifying.
Deepfakes have recently found other applications that also had me questioning the ethics of it all. Roadrunner, a documentary on the late Anthony Bourdain's life, is famed for a few lines you hear Bourdain say, famed not for what he said but because he never said them at all. The voice was generated by a speech company that trained an AI model to mimic his. While consent, or the lack thereof, was debated in the case of the deepfaked ads on Meta, consent cannot even come into play here, since the person in question has passed on. I came across a similar situation in the Korean show One More Time, which features holograms of late artists performing their songs. As with the Anthony Bourdain case, the very first episode of One More Time featured the artist Turtleman, in hologram form, performing a song he had never performed; in fact, it was a song released over 10 years after his passing. This didn't feel quite right to me, but then again, we haven't really defined what the ethics of AI should look like in the era of deepfakes and ChatGPT.
I recall learning about Isaac Asimov's "Three Laws of Robotics" through the movie I, Robot. To sum them up, they mainly cover how a robot should never be able to hurt humans, and initially, I think that was the worry most of us had: creating something along the lines of a Terminator that would take over the planet. But perhaps our imaginations ran too wild with that. The real worries should lie with what we have actually ended up creating, which is possibly far worse than a Terminator. The danger it poses may not be physically violent, but ethical, emotional and mental. As an engineer who is passionate about using technology to make a positive impact, I cannot bring myself to say we must stop the work we are doing with AI; applications like early cancer detection have tremendous impact. We have spent a lot of time pondering where to draw the line in creating AI, but perhaps it is time we draw the line for how we should and can use it instead.
Nanyang Tech '23