Technology


Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles have to be recent, no older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:


Our results indicate that hate speech communities on Reddit share speech patterns with Reddit communities for Schizoid Personality Disorder, three Cluster B personality disorders (Borderline, Narcissistic, and Antisocial), and Complex Post-Traumatic Stress Disorder. While the Cluster B disorders would be expected given prior studies of the Dark Triad and hate speech, this nonetheless offers confirmation that they are acting similarly to their Dark Triad counterparts when it comes to hate speech. Furthermore, the two non-Cluster B disorders have not, to the best of our knowledge, been discussed in the hate speech or misinformation literature, and offer new routes of investigation.

The association in speech patterns between certain psychiatric disorders and hate speech (and misinformation to a lesser extent) suggests that, despite hate speech and misinformation being social phenomena, framing and approaching them as though they were psychiatric disorders may prove beneficial in combating them. For example, counter-messaging against these issues could draw on elements of the therapies used for the psychiatric disorders most similar to hate speech and misinformation. Although hate speech embeddings being classified most frequently as Cluster B personality disorders could be seen as problematic due to those disorders’ historical resistance to treatment, the successes in treating Borderline Personality Disorder through Dialectical Behavior Therapy indicate that treatment of these disorders is not impossible and could be applied here [59,60]. Furthermore, although hate speech shares similarities with Cluster B personality disorders, that does not mean that it actually is a Cluster B personality disorder; it is most likely less resistant to change than a personality disorder is.

Misinformation’s relationship to psychiatric disorder communities remained more elusive: a clear connection can be seen in the TDA mapping, and zero-shot classification assigned over 25% of the misinformation embeddings to psychiatric disorders, but it was difficult to elucidate a clear pattern with the dataset at hand. A more robust dataset may be able to identify clearer trends.
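As a loose illustration of the zero-shot step described above (not the study’s actual pipeline), assigning each text embedding to the nearest label description by cosine similarity might look like the following minimal Python sketch. The embedding model and the label list are assumptions chosen for illustration only.

```python
# Minimal illustrative sketch of zero-shot label assignment in embedding
# space. The embedding model and label descriptions below are assumptions
# for illustration, not the configuration used in the study.
import numpy as np
from sentence_transformers import SentenceTransformer

LABELS = [
    "borderline personality disorder",
    "narcissistic personality disorder",
    "antisocial personality disorder",
    "schizoid personality disorder",
    "complex post-traumatic stress disorder",
    "no psychiatric disorder",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def assign_labels(texts):
    """Return the best-matching label for each input text."""
    text_emb = model.encode(texts, normalize_embeddings=True)
    label_emb = model.encode(LABELS, normalize_embeddings=True)
    scores = text_emb @ label_emb.T  # cosine similarity (vectors are normalized)
    return [LABELS[i] for i in scores.argmax(axis=1)]


if __name__ == "__main__":
    print(assign_labels(["example post text goes here"]))
```

In this framing, the share of misinformation embeddings that land on any disorder label rather than the neutral one would correspond to the “over 25%” figure quoted above.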


Original article from Hypertext, republished under Creative Commons Attribution-NonCommercial-ShareAlike 4.0.

  • Tea, a dating safety app for women, is the subject of an incredibly alarming data breach.
  • Tens of thousands of images submitted by users, including selfies, were ripped from a Tea server and posted to 4chan before being removed.
  • Despite Tea’s claim that the breach only affected users who registered before February 2024, it has now come to light that hackers could read DMs between users as recently as a few weeks ago.

No business wants to shout from the rooftops that it has been breached and that the data its users entrusted to it may be circulating on the internet. It’s bad for public relations and destroys trust. However, just because it feels bad doesn’t mean that the custodians of this data can sweep a breach under the rug.

Case in point is Tea. Tea is a dating safety app where women can share information about their previous partners in a bid to help other women who may encounter these men in the wild. Tea takes the Facebook groups and cobbled-together websites of old and puts a modern, more easily accessible twist on the practice.

Last week, however, the platform was the subject of a breach.

“We discovered unauthorised access to an archived data system,” Tea wrote in a post on its Instagram page.

“This archived system stored about 72 000 user-submitted images including approximately 13 000 images of selfies and selfies including photo identification submitted during account verification. These photos can in no way be linked to posts within Tea,” the developer wrote.

The company claimed that users who signed up for Tea after February 2024 were safe and that no email addresses or phone numbers were compromised. However, that’s ignoring the thousands of users who now have their data exposed. Worse still, that data system Tea mentions was posted to 4chan before it was eventually removed.

While photos can’t be linked to accounts, that’s beside the point because even just having one’s ID photo in the data dump could be incredibly dangerous for women.

And to make matters worse, somehow there has been a second incident.

As reported by 404 Media, a security researcher has discovered that it was possible for hackers to access messages between users as recently as last week. This flies in the face of Tea’s statement that no current user data is in danger. As the publication puts it, "it was trivial for 404 Media to find the real world identities of some users given the nature of their messages."

All this while Tea continues to downplay how serious this is for its users.

Even the developer’s reasoning for why the data was breached is as weak as it gets.

“During our early stages of development some legacy content was not migrated into our new fortified system. An unauthorized actor accessed our identifier link where data was stored before February 24, 2024. As we grew our community, we migrated to a more robust and secure solution which has rendered that any new users from February 2024 until now were not connected to the images involved in this incident,” Tea writes in an FAQ.

Excuse us, but what? There was an unsecured database just left somewhere in its system since last year, and Tea did nothing about it. That doesn’t sound like “dating safety tools that protect women” as the app proclaims on its website.

This should be grounds for a business-ending fine because, for the users, there is frankly nothing they can do. Their photos, possibly their messages, and more are now compromised, and while the database containing that info was removed from 4chan, it could now be just about anywhere.

However, Tea’s social media posts about this breach are awash with users begging for Tea to accept their applications to join the platform. One user even told the platform, “we don’t care about the leak”, which is mighty concerning. There are some calling for Tea to rebuild and return with a safer app for its users, but the most vocal commenters simply want access.

What’s next for Tea? We honestly don’t know. A breach like this should be the end for a company, but it seems that Tea’s popularity has outweighed the danger of this incident and will likely grow as time marches on because, despite its security failings, there is a demand for this sort of thing.


They never knew they were being filmed — on subway trains, in mall fitting rooms, on university campuses, at home.

Since late June, a Chinese-language Telegram group chat named “MaskPark Treehole Forum,” reportedly with over 103,000 members, has sparked outrage on Chinese social media for circulating obscene covert footage.

Secret intimate recordings of women and of people having sex were captured using hidden cameras disguised as screws, power sockets, and even bottles of toilet cleaner. Those sharing the footage could be a colleague, a classmate, or even a family member.

The revelations also drew broad coverage from domestic news outlets.

State-backed outlet Guangming Daily called the case “exceptionally egregious” and urged swift regulatory action in a commentary, saying: “Regulators must move faster to fill the gaps, and law enforcement mechanisms need to be strengthened. Only by doing so can we enhance the overall sense of security, free women from the fear of being watched, and make the boundaries of privacy truly inviolable.”


Source.

Long Response

I would like to thank all those who signed the petition. It is right that the regulatory regime for in-scope online services takes a proportionate approach, balancing the protection of users from online harm with the ability for low-risk services to operate effectively and provide benefits to users.

The Government has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.

Proportionality is a core principle of the Act and is built into its duties. As regulator for the online safety regime, Ofcom must consider the size and risk level of different types and kinds of services when recommending steps providers can take to comply with requirements. Duties in the Communications Act 2003 require Ofcom to act with proportionality and target action only where it is needed.

Some duties apply to all user-to-user and search services in scope of the Act. This includes risk assessments, including determining if children are likely to access the service and, if so, assessing the risks of harm to children. While many services carry low risks of harm, the risk assessment duties are key to ensuring that risky services of all sizes do not slip through the net of regulation. For example, the Government is very concerned about small platforms that host harmful content, such as forums dedicated to encouraging suicide or self-harm. Exempting small services from the Act would mean that services like these forums would not be subject to the Act’s enforcement powers. Even forums that might seem harmless carry potential risks, such as where adults come into contact with child users.

Once providers have carried out their duties to conduct risk assessments, they must protect the users of their service from the identified risks of harm. Ofcom’s illegal content Codes of Practice set out recommended measures to help providers comply with these obligations, measures that are tailored in relation to both size and risk. If a provider’s risk assessment accurately determines that the risks faced by users are low across all harms, Ofcom’s Codes specify that they only need some basic measures, including:

  • easy-to-find, understandable terms and conditions;
  • a complaints tool that allows users to report illegal material when they see it, backed up by a process to deal with those complaints;
  • the ability to review content and take it down if it is illegal (or breaches their terms of service);
  • a specific individual responsible for compliance, whom Ofcom can contact if needed.

Where a children's access assessment indicates a platform is likely to be accessed by children, a subsequent risk assessment must be conducted to identify measures for mitigating risks. Like the Codes of Practice on illegal content, Ofcom’s recently issued child safety Codes also tailor recommendations based on risk level. For example, highly effective age assurance is recommended for services likely accessed by children that do not already prohibit and remove harmful content such as pornography and suicide promotion. Providers of services likely to be accessed by UK children were required to complete their assessment, which Ofcom may request, by 24 July.

On 8 July, Ofcom’s CEO wrote to the Secretary of State for Science, Innovation and Technology noting Ofcom’s responsibility for regulating a wide range of highly diverse services, including those run by businesses, but also charities, community and voluntary groups, individuals, and many services that have not been regulated before.

The letter notes that the Act’s aim is not to penalise small, low-risk services trying to comply in good faith. Ofcom – and the Government – recognise that many small services are dynamic small businesses supporting innovation and offer significant value to their communities. Ofcom will take a sensible approach to enforcement with smaller services that present low risk to UK users, only taking action where it is proportionate and appropriate, and will focus on cases where the risk and impact of harm is highest.

Ofcom has developed an extensive programme of work designed to support a smoother journey to compliance, particularly for smaller firms. This has been underpinned by interviews, workshops and research with a diverse range of online services to ensure the tools meet the needs of different types of services. Ofcom’s letter notes its ‘guide for services’ guidance and tools hub, and its participation in events run by other organisations and networks including those for people running small services, as well as its commitment to review and improve materials and tools to help support services to create a safer life online.

The Government will continue to work with Ofcom towards the full implementation of the Online Safety Act 2023, including monitoring proportionate implementation.

Department for Science, Innovation and Technology


Move marks latest attempt by Trump Administration to collect unrelated, protected data to fuel mass deportation machine
