
India needs to create content tracking system for GenAI: Report

New Delhi, Sep 19 (IANS) An artificial intelligence (AI) content detection system will foster trust in India’s AI ecosystem and promote the safe adoption of the emerging GenAI technology, a report showed on Thursday.

One such solution, watermarking, lets developers control watermark encryption, improving the detection and authentication of AI-generated content, according to the report by EY-FICCI.

Rajnish Gupta, Partner, Tax and Economic Policy Group, EY India, called watermarking a key solution for establishing the authenticity of AI-generated content.

“Robust watermarking must resist tampering, and detection systems should maintain low false-positive rates while functioning across different Gen AI platforms,” said Gupta.

India aims to lead in this area by promoting the development and adoption of advanced watermarking techniques to ensure secure and authentic AI content creation, Gupta added.
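To make the idea concrete, here is a minimal illustrative sketch in Python, not the scheme described in the EY-FICCI report: a keyed "green list" text-watermark detector, assuming a generator that biases word choice toward words a shared secret key marks as green. The key, the function names and the threshold are all hypothetical and chosen only for illustration.

import hashlib
import math

SECRET_KEY = b"demo-key"      # hypothetical shared key (illustrative only)
GREEN_FRACTION = 0.5          # fraction of the vocabulary treated as "green"

def is_green(prev_word: str, word: str) -> bool:
    # Deterministically assign roughly half of all words to the green list,
    # keyed on the secret and on the preceding word.
    digest = hashlib.sha256(SECRET_KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def looks_watermarked(text: str, z_threshold: float = 4.0) -> bool:
    # Count green words and flag the text only when the count is far above
    # what chance would produce in unwatermarked, human-written text.
    words = text.lower().split()
    if len(words) < 2:
        return False
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    n = len(words) - 1
    expected = n * GREEN_FRACTION
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev > z_threshold

print(looks_watermarked("ordinary human-written sentence with no hidden signal"))

In this sketch the z-score threshold plays the role of the low false-positive rate Gupta refers to: raising it makes accidental matches on human text vanishingly rare, at the cost of needing longer passages before a watermark can be confirmed.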

As AI technologies become more advanced, concerns are growing about the source and authenticity of digital content.

AI content detection tools are essential for establishing authenticity and maintaining the integrity of AI-generated content.

As AI evolves, it becomes increasingly challenging to differentiate between human-created and machine-generated content, leading to potential issues such as misinformation, copyright infringement, and a loss of credibility in digital content.

Jyoti Vij, Director General, FICCI, said that as generative AI reshapes the digital landscape, “we as a nation must come together and establish robust safeguards to trace content origins, ensuring transparency and trust in the AI-driven world”.

“Let us lead with responsibility for a secure and innovative future,” said Vij.

Governments worldwide are recognising the potential of watermarking technologies. By taking the lead, India can play a pivotal role in shaping a robust domestic digital ecosystem that is secure, transparent and trustworthy, the report mentioned.

As AI technologies evolve and generate text, images, videos and audio that are often indistinguishable from human-created work, concern is growing about the authenticity and source of content, which can lead to deepfakes, copyright infringement, fake news, social manipulation and false attribution in the absence of reliable content detection.

–IANS

na/
