
For years YouTube's video-recommending algorithm has stood accused of fuelling a grab-bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and conspiracy-laden disinformation, all in the service of keeping billions of eyeballs stuck to its ad inventory. New research published today by Mozilla backs that notion up, suggesting that YouTube's (Alphabet Inc.) artificial intelligence continues to amplify low-grade, divisive and disinforming content: material that grabs attention by triggering people's sense of outrage, sowing division and polarization, or spreading baseless and harmful disinformation. This in turn implies that YouTube's problem with recommending terrible content is systemic, a side-effect of the platform's rapacious appetite to harvest views to serve its advertising. As algotransparency.org puts it: "Artificial Intelligence controls what the world is watching."

techcrunch.com/2021/07/07/yout
