writing.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
A small, intentional community for poets, authors, and every kind of writer.


#RecommenderSystems

Continued thread

On top of banning all recommender systems on social media, we must change from being powerless users of Tech giant services to having democratic, collective control of the algorithms on all services we use!

Tech platform co-ops creating apps and services which users can democratically and collectively control 🙌

NEW STUDY OUT IN IC&S

Putting #FilterBubble Effects to the Test

In an experimental survey study with real #news #recommendersystems (#NRS), we find **limited** support for #polarization effects of #algorithms inducing "filter bubble" like information environments.

Data also show how balanced algorithms may promote #depolarization.

doi.org/10.1080/1369118X.2024.

@commodon @communicationscholars #PoliticalCommunication #SocialMedia

The next round of our Algorithmic Accountability Reporting Fellowship is just around the corner! 🚀

Got questions? Join us for a Q&A session with Naiara Bellio from our Journalism team TODAY at 6 pm: algorithmwatch.org/en/apply-fe

In this round, we're diving deep into the political economy of AI, exploring crucial topics like #generativeAI and #recommenderSystems. Our aim? To unravel the AI value chain and its far-reaching impact on society, particularly on specific population groups.

Are you working on #personalization, #recommenderSystems, or #adaptiveInterfaces in the fields of #HCI, #CSCW, and/or #XR?

Consider submitting to our workshop ABIS 2024 – International Workshop on Personalized Human-Computer Interaction and Recommender Systems held at Mensch und Computer 2024! 🚀

Submissions: 23.06.2024 (AoE) NEW!
Notification: Early July 2024
Camera-Ready: 23.07.2024
Workshop day: 01.09.2024 (Karlsruhe, Germany)

Find more information here: fg-abis.gi.de/veranstaltung/ab


#publication : Can a Single Line of Code Change Society? The Systemic Risks of Optimizing Engagement in Recommender Systems on Global Information Flow, Opinion Dynamics and Social Structures

We demonstrate that engagement-maximizing algorithms necessarily lead to increased network toxicity and fragmentation of opinion space.

Everything is calibrated on real data from the #Politoscope

#systemicrisks #DSA #opiniondynamics #twitter #polarization #RecommenderSystems

jasss.org/27/1/9.html

Journal of Artificial Societies and Social Simulation: Can a Single Line of Code Change Society? Optimizing Engagement in Recommender Systems Necessarily Entails Systemic Risks for Global Information Flows, Opinion Dynamics and Social Structures, by David Chavalarias, Paul Bouchaud and Maziyar Panahi
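The mechanism the paper studies can be illustrated with a toy opinion-dynamics sketch. This is a generic bounded-confidence-style model with hypothetical parameters, not the Politoscope-calibrated model from the paper: an "engagement" feed shows each agent the most agreeable opinion, a "balanced" feed shows a random one, and agents nudge toward what they see.

```python
# Toy illustration of engagement-optimized vs. balanced exposure.
# All parameters are hypothetical; NOT the calibrated model from the paper.
import random
import statistics

def step(opinions, mode, rng, lr=0.1):
    """One round: each agent sees one opinion and moves a fraction lr toward it."""
    new = []
    for i, x in enumerate(opinions):
        others = opinions[:i] + opinions[i + 1:]
        if mode == "engagement":
            shown = min(others, key=lambda y: abs(y - x))  # most agreeable post
        else:
            shown = rng.choice(others)  # balanced: random exposure
        new.append(x + lr * (shown - x))
    return new

def simulate(mode, steps=200, n=50, seed=0):
    """Run the dynamics and return the final spread of opinions (std. dev.)."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        opinions = step(opinions, mode, rng)
    return statistics.pstdev(opinions)

# Random exposure pulls everyone toward the mean; agreeable-only exposure
# freezes agents near like-minded neighbours, preserving fragmentation.
print(simulate("engagement"), simulate("balanced"))
```

Under these toy dynamics the "engagement" feed leaves opinions spread out in clusters while the "balanced" feed contracts them toward consensus, echoing (but not reproducing) the fragmentation result described above.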
Continued thread

4/5
It won't be easy to fix recommender systems. Imagine transforming what evolved to be a "casino" into a public space, transforming "users" into citizens... Where to start? Panoptykon, ICCL and People vs BigTech investigated their most harmful features & call for change.

Fixing Recommender Systems. From identification of risk factors to meaningful transparency and mitigation:
panoptykon.org/fixing-rec-sys-

I wonder if anyone has bothered to look into the mathematics of #recommendersystems, because it is really embarrassing that papers with thousands of citations end up mapping to some very banal conventional #statistical models.
OTOH, the latter have missed tremendous opportunities to entrench themselves (and create job opportunities for statisticians) by forgetting the special cases that powered applications before the era of cheap computing that started in the 1980s.
It is #pagerank all over again

Systematic Review of Filter Bubbles in Recommender Systems: Fact or Fallacy
arxiv.org/abs/2307.01221

* filter bubble: phenomenon where individuals are isolated from diverse opinions or materials, resulting in exposure to only a select set of content, leading to reinforcement of existing attitudes, beliefs, or conditions

Our review reveals evidence of filter bubbles in recommendation systems, highlighting several biases that contribute to their existence

arXiv.org: Filter Bubbles in Recommender Systems: Fact or Fallacy -- A Systematic Review
A filter bubble refers to the phenomenon where Internet customization effectively isolates individuals from diverse opinions or materials, resulting in their exposure to only a select set of content. This can lead to the reinforcement of existing attitudes, beliefs, or conditions. In this study, our primary focus is to investigate the impact of filter bubbles in recommender systems. This pioneering research aims to uncover the reasons behind this problem, explore potential solutions, and propose an integrated tool to help users avoid filter bubbles in recommender systems. To achieve this objective, we conduct a systematic literature review on the topic of filter bubbles in recommender systems. The reviewed articles are carefully analyzed and classified, providing valuable insights that inform the development of an integrated approach. Notably, our review reveals evidence of filter bubbles in recommendation systems, highlighting several biases that contribute to their existence. Moreover, we propose mechanisms to mitigate the impact of filter bubbles and demonstrate that incorporating diversity into recommendations can potentially help alleviate this issue. The findings of this timely review will serve as a benchmark for researchers working in interdisciplinary fields such as privacy, artificial intelligence ethics, and recommendation systems. Furthermore, it will open new avenues for future research in related domains, prompting further exploration and advancement in this critical area.
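The diversity mitigation the abstract mentions can be sketched as a maximal-marginal-relevance-style re-ranker. This is a standard generic technique, not the integrated tool the authors propose; the items, scores, and topic sets below are hypothetical.

```python
# Sketch of diversity-aware re-ranking (maximal marginal relevance style).
# Generic illustration only; not the tool proposed in the reviewed paper.

def jaccard(a, b):
    """Topic-set overlap between two items (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rerank(items, k, lam=0.7):
    """Pick k items, trading off relevance (weight lam) against
    similarity to already-selected items (weight 1 - lam)."""
    pool, selected = list(items), []
    while pool and len(selected) < k:
        def mmr(item):
            rel = item["score"]
            sim = max((jaccard(item["topics"], s["topics"]) for s in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

items = [
    {"id": "a", "score": 0.9, "topics": {"politics"}},
    {"id": "b", "score": 0.8, "topics": {"politics"}},
    {"id": "c", "score": 0.5, "topics": {"science"}},
]
print([i["id"] for i in rerank(items, k=2)])  # → ["a", "c"]
```

Pure relevance ranking would return the two "politics" items; penalizing similarity lets the lower-scored "science" item through, which is the diversity effect the review argues can soften filter bubbles.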

Social media: what happens when AI takes over?

AI is about to make recommender algorithms a whole lot more effective, and potentially more dangerous, but it doesn't have to be that way.

Researchers @alasaarela and @lukethorburn are separately working on recommender algorithms that optimise for trust rather than attention and conflict. [Reg wall]

computing.co.uk/analysis/40744

www.computing.co.uk: Social media: what happens when AI takes over? AI is about to make recommender algorithms a whole lot more effective, and potentially more dangerous, but it doesn't have to be that way, say researchers
Replied in thread

@clive That’s really interesting. (Someone better tell Meta. :)

I would argue though that there *is* one algorithm where they might be doing better than other platforms, and that's in classifying the video content. I could see that having a multiplier effect on any recommendation algorithm.

But I found it interesting reading this right after reading Cory Doctorow’s essay on social quitting; particularly the part about the willingness to throw in wildcards because they aren’t focused on high profile influencers.

What I wonder is if they can afford to continue doing that in the long run, or will the economics lead them down the same path as Instagram, where ads and influencers start to dominate, and the things that make them unique go away?

doctorow.medium.com/social-qui

Medium: Social Quitting, by Cory Doctorow
Continued thread

I had the pleasure of cofounding the interdisciplinary #CARLA workshop on #concept research: conceptresearch.github.io/CARL
We also published an edited volume on the topic (open access): link.springer.com/book/10.1007

I recently transitioned to an industry position, where I work as machine learning engineer on #RecommenderSystems.

Switching to Mastodon for the obvious reasons (this weird guy who bought Twitter and who keeps posting and doing weird stuff)... Looking forward to reading your insights!

CARLA: Concepts in Action: Representation, Learning and Application