Copyright and privacy issues are blurring in the digital era

Image by TierneyMJ | Bigstockphoto

We’ve seen the growing popularity of AI apps that can generate art, text, music and even biometric data. These apps are already raising copyright issues, but the copyright, privacy and intellectual property landscape was growing more complicated even before they arrived. And things will become more complex still once our physical appearance and voice can be used in virtual reality, with the line between copyright and privacy becoming increasingly blurred.

The digital world has been challenging our traditional views of copyright, intellectual property and privacy in many ways. Among the latest examples is the case of Mason Rothschild, an artist who created some NFTs called ‘MetaBirkins’, inspired by Hermès’ Birkin bag. Hermès swiftly sued him over the project, arguing that the company’s trademark was being diluted and that potential consumers might be fooled into buying unaffiliated virtual goods. A federal jury in Manhattan agreed, determining that Rothschild had infringed the company’s trademark rights and awarding Hermès $133,000 in total damages.

However, the case potentially has larger implications beyond the specific art project in question, as the verdict affects how digital content (or at least NFTs) can be made from trademarked or copyrighted material. Rothschild insists that he is an artist, that what he made was new creative content, and that he should have the freedom to create this kind of content. According to the New York Times, Hermès commented on the verdict that it is “a house of creation, craftsmanship and authenticity which has supported artists and freedom of expression since its founding.” Rothschild’s lawyers called it a “great day for big brands” and a “terrible day for artists and the First Amendment.”

The complexities of copyright

Then there is the question of whether what we post on social media is fair game for other media. Former F1 driver and world champion Kimi Räikkönen published an Instagram post in which he was standing next to a snowman. The leading Finnish newspaper Helsingin Sanomat published an article about Räikkönen’s Instagram story and ran a screenshot of the photo from his account.

Räikkönen took the case to the Market Court, which ruled that the publishing company Sanoma (owner of Helsingin Sanomat) had violated Räikkönen’s copyright (article in Finnish). The court reasoned that the photo was not actual news content that the media is free to use – the paper could have written a story about Räikkönen’s Instagram story without publishing the photo. Sanoma must pay Räikkönen 7,000 euros in compensation, as well as his legal costs of 111,000 euros.

AI-generated content adds another dimension to the copyright debate. AI apps like DALL-E and ChatGPT are already demonstrating how a machine can not only create content, but also emulate the style of named artists or writers. If ChatGPT writes a story in the style of Stephen King, who owns the copyright – King, ChatGPT or the person who instructed ChatGPT to write the story?

Then there are the cases where AI can take our photos, voice and wearable data to create a digital twin of ourselves – as a metaverse avatar, for example. Who owns the resulting avatar? And who owns the data used to create it?

Entangling copyright and privacy laws

The cases above demonstrate how digital content has already made questions of content ownership, data ownership and privacy more complicated. Every day, we hear about new situations that lawmakers never dreamed of when current laws were written and adopted. Similarly, we may soon face new data ownership issues that we cannot even imagine today.

I wrote earlier about how the ownership of our biometric data and physical appearance in metaverses and AR/VR will soon become important copyright and privacy issues. For superstars and celebrities, it is already a copyright question. For ordinary people, it will probably be more of a privacy and security issue, particularly for things like sensitive data and data used in security applications.

One result is that copyright and privacy law – which have traditionally covered their own distinct domains – are becoming increasingly entangled. Data can be collected from anywhere and used for many kinds of purposes, from digital art to biometrics. When content is created from personal data, the question is no longer just who owns the content but also who owns the underlying data, which raises privacy questions alongside copyright ownership claims.

Are public social media posts fair game?

Copyright laws try to balance intellectual property and free speech, but this gets complex when people publish their own content on social media. Social media services set out in their terms and conditions what rights they have to material published on their platforms. They allow users to re-share content within the service – but that doesn’t mean the content can be freely published elsewhere.

Take the example of public figures (celebrities, rock stars, politicians or business leaders): they traditionally have less privacy protection because they choose to live a public life.

This has meant that the media has more freedom to report on whatever they do and say in public. At the same time, that doesn’t mean the media can use copyrighted material created by public figures just as freely, beyond what the legal provisions for ‘fair use’ allow. But how does ‘fair use’ apply when celebrities post photos on Instagram? It’s not so clear.

In practice it is very hard to monitor the use of content when anyone can easily copy it or take a screenshot and re-publish it somewhere. There are some services that help track the use of content, but more scalable solutions are needed.
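
To make the tracking problem more concrete, here is a minimal sketch in Python of how a simple perceptual hash could flag near-identical copies of an image, such as a re-posted screenshot. This is an illustrative assumption about how such a service might work, not the method of any existing product; the file names and the similarity threshold are hypothetical.

```python
from PIL import Image  # requires the Pillow package

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to a small grayscale grid and set one bit per pixel:
    1 if the pixel is brighter than the grid average, 0 otherwise."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical files: the original post photo and a republished screenshot.
original = average_hash("original_post.jpg")
candidate = average_hash("republished_screenshot.jpg")

# A small bit difference suggests near-identical images; the threshold of
# 5 bits (out of 64) is an illustrative choice, not a tested value.
if hamming_distance(original, candidate) <= 5:
    print("Likely a re-published copy of the original image")
```

Fingerprinting copies like this is technically feasible even at a basic level; the harder part is doing it at scale across platforms and then acting on the matches.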

Looking beyond the legal aspects

It is normal for new technology and new opportunities to create the need for new laws, and legislation often lags behind technological development. On the other hand, the questions about digital data and content aren’t just legal ones. They are also closely linked to basic human and digital rights when it comes to personal data, and to the technology and services available to people.

For example, it’s currently easy for anyone to create digital content, but much harder to manage its use, distribution and copyrights. Even if a law protects something, it doesn’t work in practice if individuals have no tools to exercise and protect their rights.

We need more than new laws and court decisions for these new situations. We also need technology, platforms and services where people can not only utilize, but also own and manage their own content and data, and distribute it on their own terms.

And that includes controlling the extent to which an AI program can trawl their digital content and data to create something new. We need dedicated services to manage AI-modified and AI-generated content, as well as AI tools that track the copyrights of original content and its derivatives, so that the people who own the data can control how and when AI programs use it and be compensated accordingly.
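
As a thought experiment rather than a description of any existing standard, the sketch below shows what a machine-readable usage-rights record for a piece of content might look like, linking a derivative work (such as an avatar) back to its original and recording whether AI use is permitted. All field names, permission flags and file names are hypothetical assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ContentRecord:
    """Illustrative usage-rights record for a piece of content.
    Field names and permission flags are assumptions, not an existing standard."""
    owner: str
    content_hash: str                    # fingerprint of the file itself
    allow_ai_training: bool = False      # may AI models learn from this content?
    allow_derivatives: bool = False      # may derived works be published?
    compensation_terms: str = "none"     # e.g. "per-use fee", "revenue share"
    derivative_of: Optional[str] = None  # hash of the parent work, if any

def fingerprint(path: str) -> str:
    """SHA-256 of the raw file bytes, used here as a stable content ID."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical usage: register an original photo and an avatar derived from it.
original = ContentRecord(
    owner="alice@example.com",
    content_hash=fingerprint("selfie.jpg"),   # hypothetical file
    compensation_terms="per-use fee",
)
avatar = ContentRecord(
    owner="alice@example.com",
    content_hash=fingerprint("avatar.glb"),   # hypothetical file
    allow_derivatives=True,
    derivative_of=original.content_hash,
)
print(json.dumps([asdict(original), asdict(avatar)], indent=2))
```

A record like this only becomes useful if platforms and AI tools actually read and honour it, which is exactly why the services and tools described above matter as much as the laws.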
