The ‘Positive Feedback Loop’ Problem

Last September I deleted my Facebook account, have not gone back, and have only missed it a little. That is despite Facebook showing me what it thought I wanted to see, based on my (honest) Likes and other reactions. If my feed was showing me what ‘engaged’ me, why did I get fed up and leave?

There are several parts to the answer, but an important piece is that Facebook’s algorithm is largely an uncontrolled ‘positive feedback loop’. In this type of situation the things you respond to (even if it is only because they were the first things that showed up when your account was new) are shown to you more frequently. If you continue to respond to them, they become even more frequent, and even if you don’t respond to every post or meme of that type, you will still be responding to them more often than to the posts or memes you might actually prefer.
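To make the dynamic concrete, here is a minimal sketch of an engagement-only ranker. The topics, reaction rates, and numbers are purely my own illustrative assumptions, not anything Facebook has published; a topic’s score is nothing more than its accumulated reaction count, with no damping at all.

```python
import random

# Toy engagement-only ranker: a topic's score is just its accumulated
# reaction count, so early reactions compound over time.  All topic names,
# rates, and numbers here are illustrative assumptions, not Facebook's.
random.seed(1)

scores = {"friends": 1.0, "hobbies": 1.0, "politics": 1.0}
# Assume a mild preference: the reader reacts to political posts 60% of
# the time they appear, and to the other topics 40% of the time.
reaction_rate = {"friends": 0.4, "hobbies": 0.4, "politics": 0.6}

for _ in range(5000):
    # Show a topic in proportion to its current score -- no damping at all.
    shown = random.choices(list(scores), weights=list(scores.values()))[0]
    if random.random() < reaction_rate[shown]:
        scores[shown] += 1.0   # every reaction makes that topic more frequent

total = sum(scores.values())
print({topic: round(s / total, 2) for topic, s in scores.items()})
```

Run long enough, the political topic ends up occupying the overwhelming majority of the simulated feed, far beyond the mild 60/40 preference that seeded it. That runaway amplification is what ‘uncontrolled’ means here.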

In addition, Facebook’s algorithms seem to measure only whether you reacted to a post or meme, not whether you felt good about it or whether you actually wanted to keep seeing something you opposed or that aggravated you. Even for things one ‘Likes’ politically, there is no distinction between “I support that position” and “I like thinking or talking about this topic, show me more”.
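A small sketch of the missing distinction, with reaction names and weights that are purely my own assumptions: an engagement-only signal counts every reaction the same, while a sentiment-aware signal would let “this aggravates me” push a topic down instead of up.

```python
# One reader's reactions to a string of posts on the same topic.
reactions = ["like", "angry", "angry", "comment", "angry"]

# Engagement-only measurement: every reaction counts as interest.
engagement_only = len(reactions)                      # 5 -- aggravation reads as interest

# A signal that separates "show me more" from "this aggravates me".
weights = {"like": 1.0, "comment": 0.5, "angry": -1.0}
sentiment_aware = sum(weights[r] for r in reactions)  # -1.5 -- same reactions, opposite verdict
```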

This situation resulted in my feed containing little of what I originally joined for (staying in touch with friends) and being full of political memes and posts that, yes, made me want to react. (For quite a while I held back from reacting, trying to ‘retrain’ the feed, but ultimately what was needed was a way to ‘reset’ Facebook’s measure of what I was interested in.)

Basically, the positive feedback loop of things I felt strongly about, but didn’t necessarily enjoy or want to keep coming back to, had reached a point of no return where the feed kept getting worse, not better. Because of that I quit with a very negative feeling about Facebook in particular, and social media in general.

There are other ways in which Facebook’s algorithms are flawed and/or counterproductive. While the way Facebook chooses what lands on one’s feed seems to be based purely on whether you responded to previous items, filtering posts out appears to be a post hoc step based on who, or what group or page, authored them. To be truly effective, “Don’t show me posts like this” would need a stronger damping effect based on the content of the post rather than only its origin; that is, it would need to be part of ‘the algorithm’ itself rather than a band-aid ‘filter’ applied after the fact.
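Here is a minimal sketch of that distinction, assuming a made-up Post record and made-up damping weights (none of this reflects Facebook’s actual data model): the origin-based filter only zeroes out one author, while content-based damping demotes look-alike posts no matter which page they come from.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topics: set          # e.g. {"politics", "outrage-meme"}

blocked_authors = set()  # post hoc filter: "don't show posts from this page"
topic_damping = {}       # in-ranker damping: per-topic down-weighting factor

def dont_show_me_posts_like_this(post: Post) -> None:
    # Origin-based response: suppress that one author or page.
    blocked_authors.add(post.author)
    # Content-based response: also dampen every topic the post carried,
    # so similar posts from other pages get demoted too.
    for topic in post.topics:
        topic_damping[topic] = topic_damping.get(topic, 1.0) * 0.5

def score(post: Post, engagement_score: float) -> float:
    if post.author in blocked_authors:
        return 0.0                                  # band-aid: filtered after the fact
    damping = 1.0
    for topic in post.topics:
        damping *= topic_damping.get(topic, 1.0)    # damping built into the ranking itself
    return engagement_score * damping
```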

Then, of course, there is the problem of encouraging misinformation, disinformation, and conspiratorial thinking through this same self-reinforcing positive feedback loop, again with only post hoc corrections applied.

In the end, I think Facebook’s ‘algorithm’ (and probably other large-scale machine learning applications) has some pretty fundamental issues at a theoretical level, and that the layers of ad hoc correction just make it harder to recognize and fix those foundational issues.