UKRIO

On regulating the use of AI, Ivana Kunda says new laws and regulations will inevitably lag behind the technology. We can instead apply old principles to the new technology. 🧵

AI 'intelligence' in large language models is very different from human intelligence. An LLM shuffles words around to create new sentences by statistical guesswork, without any knowledge of their meaning (a 'stochastic parrot'). The output may sound sophisticated but be totally wrong or invented.
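The 'stochastic parrot' idea can be sketched with a toy next-word model. This is only an illustration under big simplifications: the corpus and the bigram counting here are invented for the example, while real LLMs predict tokens from billions of learned parameters. But the core point is the same: text is produced by sampling likely continuations, with no representation of meaning.

```python
import random

# Invented toy corpus; for each word, record the words that were seen to
# follow it. This stands in for an LLM's learned next-token statistics.
corpus = "the cat sat on the mat and the cat ran".split()
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length, seed=None):
    """Emit a word sequence by repeatedly sampling a plausible next word.
    No meaning is involved: only the statistics of observed word pairs."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = bigrams.get(words[-1])
        if not options:          # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8, seed=0))
```

Every output is locally plausible (each word pair occurred in the corpus), yet the model has no idea what a cat or a mat is, which is why fluent output can still be entirely wrong.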

This is problematic for scientific research and writing, because uncritical use is dangerous: it creates fake science, from which there may be no return. Human fabrication already exists, but on a smaller scale.

The black box problem: we can't explain exactly how LLMs work, because they use billions of parameters, nor can we reproduce their outputs.

LLMs may be biased due to biased training materials, but inexperienced users may not be aware of these biases, errors, or 'hallucinations'. The output may present only one view, but science needs to assess multiple views and varied information.

We don't know the source of the concepts output by LLMs, but in academia we need to attribute ideas and use citations.

UNESCO, EU, etc., say a human perspective must be central to the regulation of AI. Tools do not have personalities or personhood. Few countries have AI regulation or laws; the EU has been held up by lobbying.

UNESCO has just released guidance on AI in education and research: unesco.org/en/articles/guidanc

www.unesco.org — Guidance for generative AI in education and research: UNESCO's first global guidance on GenAI in education aims to support countries to implement immediate actions, plan long-term policies and develop human capacity to ensure a human-centred vision of these new technologies.

What to do about AI in academia?

1. Nothing. Industry will welcome this.

2. Ban. Academia has been shocked, and using these tools goes against the principle that the work must be your own. Regard their use as equivalent to plagiarism.

3. Moderate. Use is allowed if the instructor permits it. Bans don't address the underlying issues or educate people about the risks of AI tools. Can we instead persuade users why these tools won't help them learn or produce good content?