Two recent examples of reliance on generative AI content have highlighted the risks and consequences that follow when independent checking and verification are not undertaken. One involved two US lawyers whose closing submissions cited ChatGPT-generated cases that did not exist; the other involved Australian academics whose submission to a parliamentary inquiry, prepared with Google Bard, contained false allegations of misconduct against the big four consultancy firms.

We asked ChatGPT to prepare a report on these examples and the associated risks, which produced a typically generic response. You can read what ChatGPT said, and how it says the risks can be mitigated, in its article, which it titled:

The Dark Side of ChatGPT: Unravelling the Misuse and Its Consequences

Introduction: The advent of artificial intelligence has ushered in a new era of possibilities, but it is not without its challenges. […]