Hello World, coming up on 20 years but I think this is my first thread so be gentle : )
I work in a world where LLMs and what they are trained on is important and had this reference shared by a lead developer: https://arxiv.org/abs/2306.13141
Some of you still closer to the metal (I lost my developer fastball years ago) may be better able to parse it, but what I've extracted is that the broader the sample you draw from (publicly available data), the more biased/hateful your AI becomes.
This was interesting enough on its own (to me?) that I thought some of you would be interested, but I was also curious if anyone could help suss out the why. Maybe it's an 'organic' process and people are just ass-hats? Maybe people are mezza mezza but become monsters when on the internet (leader in clubhouse : ). It's also got me thinking: perhaps hate groups are organized in applying AI to flood the internet with hate content?