- Internet: Medium For Communication, Medium For Narrative Control
- Part 5 — Adapting
- Section 3 — Technical Solutions, Wars and Patches
Table Of Content

- Introduction
- Part 1: The Artifacts And Spaces
In this part we'll describe the important artifacts and places. Going over these essential, but basic, pieces is mandatory to understand how they come into play as tools.
- Part 2: The Actors and Incentives
In this part we'll go over how the previous elements are put to work by the different actors, who these actors are, what their incentives are, and the new dynamics.
- Part 3: Biases & Self
In this part we'll try to understand why we are prone to manipulation, why it works so effectively, or not, on us, and who is subject to it.
- Part 4: The Big Picture
In this part we'll put forward the reasons why we should care about what is happening in the online sphere, why it's important to pay attention to it, and the effects it could have at the scale of societies and individuals. This part will attempt to give the bigger picture of the situation.
- Part 5: Adapting
In this concluding part we'll go over the multiple solutions that have been proposed or tried to counter the negative aspects of the internet.
- Conclusion & Bibliography
- Detection and Remediation, War of Algorithms
- Changing the Way the Platforms Work, Communication and Filters
- Differential Privacy and Privacy First Internet
- Defensive Tools
- Attention Management
- Decentralization, Transparency, and Free Software
When the free market and regulations fail, when laws can't properly protect anyone and trust has eroded, we're left only with ourselves. In that scenario, tech is seen as the savior of the internet, the weapon and armor of choice for everyone: building and selecting software that resolves the issues.
For social media platforms and other big entities such as governments, algorithms can be used for detection, categorization, and remediation. The first step in tackling an issue is to know about it: without knowledge it's hard to defend.
Platforms could employ algorithms that automatically flag posts
containing hate speech or other sensitive content, and either remove
them or let a person take further action from there. There could also
be automatic detection systems that find bot accounts, satirical
content (including deep fakes), and state-linked accounts, then label them as such. As we said
earlier, if there are laws requiring such accounts to be tagged, that makes their activities
transparent and reduces the chances of black propaganda.
Some even talk of contextualization engines that could bring up the context
behind any piece of media posted.
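To make the idea concrete, here is a minimal sketch of such a flagging pipeline. The scoring function, term list, and thresholds are invented placeholders; a real platform would use a trained classifier, but the flow, score, auto-remove, queue for human review, or allow, is the same.

```python
# Minimal sketch of a content-moderation pipeline: score a post, then
# either auto-remove it, queue it for human review, or let it through.
# The term list and thresholds below are illustrative, not a real model.

def toxicity_score(text):
    # Placeholder scorer: counts flagged terms per word.
    flagged_terms = {"hate", "slur", "attack"}
    words = text.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return hits / max(len(words), 1)

def moderate(text, remove_threshold=0.5, review_threshold=0.2):
    score = toxicity_score(text)
    if score >= remove_threshold:
        return "remove"
    if score >= review_threshold:
        return "human-review"
    return "allow"

print(moderate("a calm discussion about gardening"))  # allow
print(moderate("hate hate attack"))                   # remove
```

The middle tier is the important design choice: borderline content goes to a person rather than being silently removed, which is exactly the "let a person take further action" step described above.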
Facebook has recently been experimenting with labeling bot accounts,
adding an official stamp to state-related and celebrity accounts, and
labeling satirical and other types of posts as such. Other platforms are
applying comparable labeling, at least for official accounts.
Algorithms could then be countered with algorithms. After detecting malicious intent, some actors, be they the platforms enforcing their own policies or state entities, could launch offensive “pro-social” bots themselves, or other means of automatic counter-propaganda as a defense.
However, algorithmic solutions are problematic because they create an arms race. Moreover, over-reliance on algorithms can have consequential results, as algorithms can reflect the inner biases of those who build them. We've seen that before.
Another technical solution, sitting between internet platforms and their
users, could be to change the criteria that make the internet what it
is: speed, exposure, and permanence.
There has been some arguable success with platforms that make messages
ephemeral, for example. However, these attributes are hard to control
because they are innate to the medium.
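The ephemerality idea boils down to attaching a time-to-live to every message and dropping anything that has expired. The sketch below shows the mechanism; the store, field names, and TTL value are illustrative assumptions, not any particular platform's implementation.

```python
# Minimal sketch of ephemeral messaging: each message carries a
# timestamp, and anything older than the time-to-live is purged on read.
import time

class EphemeralStore:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.messages = []  # list of (timestamp, text)

    def post(self, text, now=None):
        self.messages.append((now if now is not None else time.time(), text))

    def read(self, now=None):
        now = now if now is not None else time.time()
        # Purge expired messages before returning what remains.
        self.messages = [(t, m) for t, m in self.messages if now - t < self.ttl]
        return [m for _, m in self.messages]

store = EphemeralStore(ttl_seconds=60)
store.post("hello", now=0)
store.post("still here", now=50)
print(store.read(now=70))  # ['still here'], the first message expired
```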
A lot of people today are rethinking the way we communicate online,
looking for means to communicate without toxicity, filter bubbles,
addiction, and extremism. It's a question that remains unanswered.
Some have attempted to create platforms centered not only on
communicating, but also on organizing ideas, raising the effort required
to participate, a form of positive, creative gatekeeping.
Some have played with new recommendation algorithms that would increase
cognitive diversity, pushing people out of filter bubbles by exposing
them to different ideas, cultures, geographies, and anything unfamiliar.
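One simple way such a recommender could work is greedy diversity-aware re-ranking: instead of showing the most relevant items first, always pick the next item from the topic the user has seen least. The items and topic labels below are invented for the example.

```python
# Minimal sketch of diversity-aware re-ranking: greedily pick the next
# item from the least-shown topic, so the feed mixes in unfamiliar
# content instead of stacking one topic. Items/topics are illustrative.
from collections import Counter

def diversify(candidates):
    """candidates: list of (item, topic) pairs, ranked by relevance."""
    shown = Counter()
    feed = []
    remaining = list(candidates)
    while remaining:
        # Prefer the topic we've shown least; ties keep relevance order.
        best = min(remaining, key=lambda it: shown[it[1]])
        remaining.remove(best)
        shown[best[1]] += 1
        feed.append(best[0])
    return feed

ranked = [("a1", "politics"), ("a2", "politics"), ("a3", "politics"),
          ("b1", "science"), ("c1", "travel")]
print(diversify(ranked))  # ['a1', 'b1', 'c1', 'a2', 'a3']
```

A pure-relevance feed would have shown three politics items in a row; the re-ranked feed interleaves the unfamiliar topics first, which is the "cognitive diversity" goal in miniature.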
Others are trying to build systems that can put metrics on our biases,
on our tendency to associate with similar people, on our clustering.
Essentially, creating a form of self-monitoring to add serendipity,
inclusiveness, and diversity to our own lives.
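The simplest such metric is a homophily ratio: the fraction of your interactions that stay inside your own group, where a value near 1.0 means you mostly associate with similar people. The groups and names below are illustrative assumptions.

```python
# Minimal sketch of a self-monitoring homophily metric: what fraction
# of your interactions stay inside your own group? Names/groups are
# invented for the example.

def homophily(my_group, contacts):
    """contacts: list of (name, group) pairs you interacted with."""
    if not contacts:
        return 0.0
    same = sum(1 for _, group in contacts if group == my_group)
    return same / len(contacts)

contacts = [("ana", "designers"), ("bo", "designers"),
            ("cy", "gamers"), ("di", "designers")]
print(homophily("designers", contacts))  # 0.75
```

A tool tracking this over time could nudge the user when the ratio creeps toward 1.0, the "self-monitoring to add serendipity" idea in its most basic form.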
Some have tried a topic-based internet where people are randomly connected
based on common interests, instead of vote metrics and shock-value.
Another way is to not put all the facets of our lives in the same place:
to distinguish between different activities, hobbies, and interests,
and to keep them separate. This helps avoid social cooling and facades.
Others are trying to see whether the micro-transactions and micro-payments
we discussed in a previous section could build an internet of the
passion economy, driving away other incentives and leaving only pure
human creativity and interests.
Technologically, internet giants are trying to win back the trust of
netizens by talking of differential privacy and privacy-first tech.
As we said before, this is similar to privacy as a product, privacy
as market value. However, big tech is selling these words not only
to users, but also to protect itself from the law.
Some of these companies are now disabling third-party cookies by
default in their products, notifying users of tracking, providing privacy
sandboxes, enabling end-to-end and other types of encryption by default,
and using differential privacy, cohort-based analysis, or context-based
ads instead of micro-targeting.
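Of these, differential privacy is the most concrete: add calibrated random noise to an aggregate statistic so that no single user's presence can be inferred from the result. Below is a minimal sketch of the classic Laplace mechanism for a counting query; the dataset, predicate, and epsilon value are illustrative assumptions.

```python
# Minimal sketch of the Laplace mechanism from differential privacy:
# a counting query has sensitivity 1 (one user changes the count by at
# most 1), so adding Laplace(0, 1/epsilon) noise makes it
# epsilon-differentially private. Data and epsilon are illustrative.
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse transform sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=1.0):
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)

users = [{"age": a} for a in (17, 24, 31, 45, 52)]
noisy = private_count(users, lambda u: u["age"] >= 18, epsilon=0.5)
print(round(noisy, 2))  # close to the true count of 4, but randomized
```

Smaller epsilon means more noise and stronger privacy; the platform trades accuracy of its analytics for a mathematical guarantee about individuals.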
Yet, all of this makes little sense without transparency, and as long as
these entities remain for-profit.
Digital citizens still don't trust them and would rather rely on defensive
tools and products to feel safer online and avoid being tracked.
They use tools such as VPNs, which we discussed earlier, proxies, and
ad blockers. According to some statistics, 47% of US internet users now
use ad-blocking software.
Additionally, many are now using attention management and information
management tools to regain control over their attention. We've
seen earlier how the internet can affect us cognitively in the long term
and how we are inundated with information.
Attention management tools are software that warn people when they get
inadvertently absorbed into activities or media, letting them be
proactive instead of reactive.
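At its core, such a tool just tracks time spent per activity and raises a warning when a budget is exceeded. The activity names and budget below are invented for the example.

```python
# Minimal sketch of an attention management tool: accumulate minutes
# spent per activity and warn once a session budget is exceeded.
# Activity names and the budget are illustrative assumptions.

class AttentionTracker:
    def __init__(self, budget_minutes):
        self.budget = budget_minutes
        self.sessions = {}  # activity -> total minutes spent

    def log(self, activity, minutes):
        self.sessions[activity] = self.sessions.get(activity, 0) + minutes
        if self.sessions[activity] > self.budget:
            return f"warning: {self.sessions[activity]} min on {activity}"
        return None  # still within budget, stay quiet

tracker = AttentionTracker(budget_minutes=30)
print(tracker.log("social-feed", 20))  # None, still within budget
print(tracker.log("social-feed", 15))  # warning: 35 min on social-feed
```

Real tools hook into the OS or browser to log time automatically; the point is that the warning arrives while the absorption is happening, not in a report afterwards.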
Information management tools are database systems used to organize,
in a clear and concise way, the information we find important. They help
us deliberately decide what should be in our memory extension, which we
discussed when we saw the cognitive changes the internet brings.
One great thing about the internet is that, even though pre-existing
platforms are convenient, people don't have to use them; they can be
empowered to create their own.
The establishment of standards that allow for decentralization, and the
keeping of platforms open, are good technological ways to avoid
bait-and-switch tactics, data leaks, privacy concerns, loss of access,
being locked out of accounts, etc.
Avoiding monoliths is avoiding putting all eggs in one basket. Users
can choose to spread their internet usage across different services and
favor decentralized ones.
On top of this, if netizens have enough patience, expertise, and time,
one of the best solutions is to own the data and tools by hosting them
themselves: self-hosting. This is more than having backup copies of
digital assets; it's also about regaining control, trust, and privacy.
This is what some popular YouTubers and others are doing by building
their own sites, so as not to be victims of the market, and to keep
platforms decentralized and open.
Sometimes openness isn't enough: we can host services ourselves, but if the software is proprietary we might still not trust it. What is needed in that case is transparency.
Transparency can be achieved in different ways, self-sovereign identity
is one that we’ve seen in the previous section.
Another way is to use so-called “zero-data” applications: software that
lets us be in control of our data from the start, doesn't access it,
or doesn't do any type of tracking.
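The zero-data principle is simple: all state lives in files the user owns, and the application never phones home. Here is a minimal local-first sketch; the file path and record format are illustrative assumptions.

```python
# Minimal sketch of the "zero-data" idea: the application reads and
# writes only a local file the user owns; no server, no telemetry.
# The path and record format are illustrative.
import json
from pathlib import Path

def save_notes(notes, path):
    # All state lives locally; nothing leaves the user's machine.
    Path(path).write_text(json.dumps(notes))

def load_notes(path):
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []

save_notes(["own your data"], "notes.json")
print(load_notes("notes.json"))  # ['own your data']
```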
Users can rely on feedback and recommendations from non-profit
organizations that defend digital privacy, such as the Electronic
Frontier Foundation (EFF), to stay up to date with best practices and
events in the online sphere. We'll tackle the education part in the
next section.
Yet, that can be limiting and not transparent enough. The most transparency we can get from software is when it is open source and when its license enforces the respect of freedom and liberty: what we call free software.
Certain non-profit organizations and projects have as their mission to
promote this kind of user freedom, namely the Free Software Foundation
(FSF) and the GNU Project.
Technically savvy netizens can still rely on their own instincts and
replace their tools and services with the open source projects they
deem more trustworthy.
Open source and free software licenses can enhance and create value
for the public sector too. They can be used within the ICT framework in
infrastructure and services offered by institutions, all of them publicly
funded developments.
Through the use of free software, citizens and governments will
feel more in control of information technology. It would grant them
digital independence, sovereignty, and security, ensuring that citizens'
data is handled in a trustworthy manner.
The use of free formats and protocols will also influence the way
development is done, increase trust, and reduce the distance between
government software and citizen involvement. Open source encourages
collaboration by nature.
Having everything done in the open, as open access (OA), would also
reduce waste, avoid non-replaceable software, and offer technological
neutrality. This is reminiscent of the current trendy discussion
around the electronic right to repair.
These could be applied to any government services, especially if they involve social media as a utility and digital identity. The possibilities are interesting.
Practically, this can be implemented at the state or institution
level through the legal system, regulations, or policies, or through
encouragement and promotion (or discouragement).
Multiple nation-wide entities and bodies are transitioning to open source
or free-software solutions through different measures. For example:
- In 2002, the Peruvian government voted to adopt open source across all its bodies, to ensure the pillars of democracy were safeguarded.
- In 2004, the Venezuelan government passed a law, decree 3390, that would also transition the public agencies to use open source software.
- Since 2005, the National Resource Centre for Free and Open Source Software (NRCFOSS) in India has been promoting the use and development of free software.
- Since 2008, the Malaysian Public Sector Open Source Software Program has similarly been discouraging the use of proprietary software and encouraging free software.
- In the same year, 2008, Ecuador passed a law, decree 1014, to migrate the public sector to Libre software.
- New Zealand, in 2010, through its open access initiative, NZGOAL, started promoting free and open source licenses as guidance for releasing publicly funded software.
- In the same spirit, the UK Government Digital Service, formed in 2011 after a proposal from 2010, included in its guidelines the promotion of open source when it fits the government IT strategy.
…And there are countless other examples showing how government entities are promoting or using open source and free software to bring back trust and transparency. As we said earlier, this is a must during the trust and truth crises we are facing.
This concludes our review of the technical and software solutions that can be used to avoid issues we've seen in this series, such as filter bubbles, toxicity, the truth and trust crises, and attention and information overload. First, we saw how big entities can use algorithms to fight online content through detection and remediation, either removing or labeling it. However, we also saw that this would give rise to an arms race of bots. Then we looked at ways internet platforms could change their approach to make communication less toxic and increase cognitive diversity. Next, we mentioned how internet giants are patching themselves by introducing more privacy and security features in their products. After that, we said netizens would still lack trust because of the lack of transparency, and would thus rely on defensive privacy tools such as VPNs, proxies, and ad blockers. Users can also rely on software tools to help them manage their attention and information. Lastly, we explored finding transparency and trust by decentralizing services and using free and open source software, which can also be applied at the national level to tackle the trust issue with governments.
References
- Counterpropaganda
- Zero Data App
- Countering Disinformation: Russia’s Infowar in Ukraine
- The Biology of Disinformation: memes, media viruses, and cultural inoculation - IFTF
- State-Sponsored Trolling: How Governments Are Deploying Disinformation as Part of Broader Digital Harassment Campaigns
- How memes are becoming the new frontier of information warfare
- Propaganda, Disinformation, & Other Influence Efforts: The Modern Information Environment
- Information overload - Wikipedia
- Attention management (Wikipedia)
- Whatsapp and the domestication of users
- ON THE NATURE OF THE INTERNET — Leslie Daigle
- Why popular YouTubers are building their own sites
- Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know
- Why You Should Use It to Protect Your Privacy, Forbes
- The Whole Web Pays For Google And Facebook To Be Free
- Charting a course towards a more privacy-first web
- Screw it, I’ll host it myself
- The Internet Way of Networking: Defining the critical properties of the Internet
- Measures Governments Can Use to Promote Free Software
- Adoption of free and open-source software by public institutions
- DoD Open Source Software (OSS) FAQ
- Open Source for Government
- 4 myths about open source in government
- GNU
- FSF - The Free Software Foundation (FSF)
- EFF - Electronic Frontier Foundation
- Glue Comic
- Flarum Forums made simple
- The Future of Group Messaging
- Public Money, Public Code
- Open access
- Keeping Track of Your Things
- Open access in New Zealand
- Facebook hopes tiny labels on posts will stop users confusing satire with reality
- Facebook test adds contextual ‘labels’ to popular pages
- The Hitchhiker’s Guide to Online Anonymity
- NRCFOSS
- NZGOAL
- Ethical anti-design, or designing products that people can’t get addicted to.
- UK Government Service Design Manual
Attributions: J. Kepler, Mysterium Cosmographicum, 1660
If you want to have a more in depth discussion I'm always available by email or irc.
We can discuss and argue about what you like and dislike, about new ideas to consider, opinions, etc.
If you don't feel like "having a discussion" or are intimidated by emails
then you can simply say something small in the comment sections below
and/or share it with your friends.