Shadow AI – the use of AI tools without formal approval – poses unique risks in the publishing industry that, if not properly addressed, could lead to legal issues down the road.
The use of AI involves several risks, including model bias, confidentiality breaches, and inaccurate output. Gary Kibel, a partner in Davis+Gilbert’s Privacy, Technology + Data Security practice, was quoted in this Digiday article explaining that feeding copyrighted material into AI tools could create legal exposure, noting that “the publisher would be liable for either infringing on copyright or generating infringing content.”

Confidentiality is another concern: information entered into an AI tool may be absorbed into its training data and resurface in someone else’s output. “If you input into an AI platform, ‘If CEO Jane Doe did the following, what would that mean?’ and then the AI platform rolls that into their training data, and it comes out in someone else’s output that the CEO Jane Doe did the following… they may come to you and say, ‘How in the world did this get out? I told only you,’” Kibel says.
For more information on the legal risks of Shadow AI in the newsroom, read the full Digiday article below.