You know what?

The Anti-5G USB Stick Is a Scam

A lot of bullshit has been circulating about 5G lately, specifically as it relates to the novel coronavirus. To be very clear: there is no evidence that the rollout of 5G is at all connected to the origin or spread of covid-19. But that hasn't stopped dozens of attacks on cell towers, the proliferation of cockamamie conspiracy theories, and, of course, hucksters peddling bogus anti-5G products to the scared masses.

The 5G BioShield USB Key is the latest dumb device to make headlines. On its website, it's described as creating a "wearable holographic nano-layer catalyzer" that can be used for the "balance and harmonization of the harmful effects of imbalanced electric radiation." Basically, the device supposedly creates a holographic bubble that somehow protects you from the scary 5G waves over a baffling range of 8 to 40 meters. It is, in fact, a regular old 128MB USB stick with vaguely sci-fi words slapped onto it for marketing. Pen Test Partners, a UK-based security firm, wrote a teardown blog post on the device — as you'd expect, the stick's supposed "quantum holographic catalyzer technology" transmitter was nothing more than a sticker. No other electronic components were found.

The most disturbing thing about the USB stick — besides the fact it costs £300, or roughly $350 — is that it's been recommended by the Glastonbury Town Council's 5G Advisory Committee, which has called for an inquiry into 5G tech, according to the BBC. You can find the recommendation and a link to this bogus device on page 30 of the committee's final report. The report is also full of spurious 5G claims, saying that birds may fall "out of the sky dead when 5G is on" and that people could get nosebleeds or commit suicide at higher rates. (Snopes has debunked the 5G bird deaths, and health fears over 5G radiation are equally unfounded.)

Equally disturbing, the BBC found that the founders of BioShield Distribution were previously involved in a dubious business called Immortalis that hawked a dietary supplement featuring a "proprietary procedure that leads to relativistic time dilation and biological quantum entanglement at the DNA level." Enough said.

But the anti-5G USB stick isn't the only bogus product out there. On Amazon, you'll find an assortment of anti-5G underpants. Entering "5G protection" into Amazon's search bar will net you some 9,000 results, with products ranging from pills and stickers to phone cases, hats, and crystal bracelets. Querying "5G shield" will get you roughly 1,000 results, including a ridiculous maternity belly band that supposedly protects an unborn fetus from the 'dangers' of 5G. Is it surprising that Amazon hasn't cracked down on these products? No. But it goes without saying that it should.

So in light of the charlatans capitalizing on 5G-related coronavirus hoaxes, it bears repeating that 5G frequencies don't present a greater risk than other types of electromagnetic radiation. These bogus anti-5G products are at best expensive placebos, and according to the New York Times, your skin is actually a pretty good barrier against higher-frequency radio waves, including 5G. So, no, no one is catching covid-19 from 5G cell towers. And those brain worms you got? It's more likely you picked them up from Twitter.
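To put a rough number on that skin point: the depth to which radio waves penetrate conductive material falls as frequency rises. Below is a back-of-the-envelope sketch using the classical skin-depth formula. The tissue conductivity values are assumed ballpark figures, and tissue is really a lossy dielectric rather than a good conductor, so treat the outputs as order-of-magnitude only, not figures from the Times.

```python
# Order-of-magnitude sketch: classical skin depth,
# delta = sqrt(2 / (omega * mu * sigma)).
# The conductivity values are assumed approximations for soft tissue.
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, H/m


def skin_depth_mm(freq_hz: float, sigma: float) -> float:
    """Approximate penetration depth in millimeters for a conductive medium."""
    omega = 2 * math.pi * freq_hz
    return 1000 * math.sqrt(2 / (omega * MU_0 * sigma))


print(skin_depth_mm(900e6, 0.9))  # ~18 mm at a 900MHz 4G-era band
print(skin_depth_mm(28e9, 20.0))  # under 1 mm at 28GHz (5G mm-wave)
print(skin_depth_mm(60e9, 36.0))  # a fraction of a mm at 60GHz
```

The trend is the point: the higher the frequency, the shallower the penetration, which is why millimeter-wave 5G mostly stops at the surface of your skin.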

Related Feeds

Yup, Facebook and Mark are evil.

Facebook reportedly ignored its own research showing algorithms divided users

An internal Facebook report presented to executives in 2018 found that the company was well aware its product, specifically its recommendation engine, stoked divisiveness and polarization, according to a new report from The Wall Street Journal. Yet despite warnings about the effect this could have on society, Facebook leadership ignored the findings and has largely tried to absolve itself of responsibility for partisan divides and other forms of polarization it directly contributed to, the report states. The reason? Changes might disproportionately affect conservatives and might hurt engagement, the report says.

"Our algorithms exploit the human brain's attraction to divisiveness," one slide from the presentation read. The group found that if this core element of the recommendation engine were left unchecked, it would continue to serve Facebook users "more and more divisive content in an effort to gain user attention & increase time on the platform." A separate internal report, crafted in 2016, said 64 percent of people who joined an extremist group on Facebook only did so because the company's algorithm recommended it to them, the WSJ reports. In other words, Facebook found that its own algorithms were pushing people to join extremist organizations.

Leading the effort to downplay these concerns and shift Facebook's focus away from polarization has been Joel Kaplan, Facebook's vice president of global public policy and former chief of staff under President George W. Bush. Kaplan is a controversial figure in part due to his staunch right-wing politics — he supported Supreme Court Justice Brett Kavanaugh throughout his nomination — and his apparent ability to sway CEO Mark Zuckerberg on important policy matters. Kaplan has taken on a larger role within Facebook since the 2016 election, and critics say his approach to policy and moderation is designed to appease conservatives and stave off accusations of bias. Kaplan, for instance, is believed to be partly responsible for Facebook's controversial political ad policy, under which the company said it would not fact-check misinformation put forth in campaign ads. He's also influenced Facebook's more hands-off approach to speech and moderation over the last few years by arguing the company doesn't want to seem biased against conservatives.

The Wall Street Journal says Kaplan was instrumental in weakening or entirely killing proposals to change the platform to promote social good and reduce the influence of so-called "super-sharers," who tended to be aggressively partisan and, in some cases, so hyper-engaged that they might be paid to use Facebook or might be bots. Kaplan pushed back against many of these proposed changes — several of which were crafted by News Feed integrity lead Carlos Gomez Uribe — for fear they would disproportionately affect right-wing pages, politicians, and other parts of the user base that drove up engagement.

One notable project Kaplan undermined was called Common Ground, which sought to promote politically neutral content on the platform that might bring people together around shared interests like hobbies. But the team building it said it might require Facebook to take a "moral stance" in some cases by choosing not to promote certain types of polarizing content, and that the effort could harm overall engagement over time, the WSJ reports. The team has since been disbanded.
In a statement, a Facebook spokesperson tells The Verge, “We’ve learned a lot since 2016 and are not the same company today. We’ve built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform’s impact on society so we continue to improve. Just this past February we announced $2M in funding to support independent research proposals on polarization.”

Still using Zoom?

Zoom won't encrypt free calls because it wants to comply with law enforcement

If you're a free Zoom user waiting for the company to roll out end-to-end encryption for better protection of your calls, you're out of luck. Free calls won't be end-to-end encrypted, and law enforcement will be able to access your information in cases of 'misuse' of the platform.

Zoom CEO Eric Yuan said today that the video conferencing app's upcoming end-to-end encryption feature will be available only to paid users. After announcing the company's financial results for Q1 2020, Yuan said the firm wants to keep the feature away from free users so it can work with law enforcement in cases of misuse: "Free users, for sure, we don't want to give that [end-to-end encryption]. Because we also want to work it together with FBI and local law enforcement, in case some people use Zoom for bad purpose."

In the past, platforms with end-to-end encryption, such as WhatsApp, have faced heavy scrutiny in many countries because they were unable to trace the origins of problematic and misleading messages. Zoom likely wants to avoid being in such a position and to comply with local laws so it can keep operating across the globe. Alex Stamos, working as a security consultant with Zoom, said the company wants to catch repeat offenders for hate speech or child-exploitative content by not offering end-to-end encryption to free users.

In March, The Intercept published a report stating that the company didn't use end-to-end encryption, despite claiming otherwise on its website and in its security white paper. Zoom later apologized and issued a clarification specifying that it didn't provide the feature at the time. Last month, the company acquired Keybase.io, an encryption-focused identity service, to build its end-to-end encryption offering. Yuan said today that the company has received a lot of feedback from users on encryption and is working on executing it, though he didn't specify a release date for the feature.

According to the Q1 2020 results, the company grew 169% year-on-year in terms of revenue. Zoom has more than 300 million daily participants attending meetings through the platform.
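For readers unfamiliar with what "end-to-end" actually buys you, here is a minimal sketch of the idea using the PyNaCl library. This is not Zoom's protocol (its design, built on the Keybase acquisition, is its own); it just illustrates the property at stake: a relay server only ever handles ciphertext, so it has nothing readable to hand over.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# NOT Zoom's protocol; just an illustration of the property being withheld
# from free users: the server relays bytes it cannot decrypt.
from nacl.public import PrivateKey, Box

# Each participant generates a keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meeting audio frame")

# The relay server only ever sees `ciphertext`. Only Bob can open it.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meeting audio frame"
```

Without end-to-end encryption, calls are typically encrypted only between each client and the provider's servers, which means the provider (and anyone it cooperates with) can access the decrypted stream in between.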

Just stop using FB's WhatsApp if you care about privacy.

Signal announces new face-blurring tool for Android and iOS

Encrypted messaging app Signal has announced a new face-blurring tool that will be incorporated into the latest Android and iOS versions of the software. Users sharing pictures through the app will be able to quickly blur faces, adding another layer of privacy to pictures, though not necessarily hiding the subject's identity completely.

In a blog post announcing the update, Signal co-founder Moxie Marlinspike linked the update to the worldwide protests against racism and police violence sparked by the killing of George Floyd by law enforcement. These protests have led to record downloads for Signal, which uses end-to-end encryption to make messages harder to intercept. "We've also been working to figure out additional ways we can support everyone in the street right now," writes Marlinspike. "One immediate thing seems clear: 2020 is a pretty good year to cover your face."

When you take a picture through Signal and select the Blur option in the toolbar, the app will automatically detect any faces it spots in your image. If it misses any, you can blur them by hand, along with any other features you want to hide. All processing is done on-device, meaning uncensored images never leave the user's phone.

Although blurring faces in photographs certainly makes pictures more private, it's by no means a foolproof way of anonymizing images and hiding someone's identity. Some blurring and pixelation methods can be reversed with the right tools, for example. And anyone seeking to identify someone in a picture can work from other information, such as clothing and tattoos, which can be compared with other, unblurred images. Even if attendees at a protest, for example, hide the identities of fellow protestors, that doesn't mean other groups and individuals will do the same. Surveillance cameras, police body cameras, and press photographers are all capturing images. Ultimately, the best way to obscure your identity is to take matters into your own hands and wear a mask.
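The general detect-then-blur approach is easy to sketch. The snippet below is not Signal's code, just a minimal illustration of the same idea using OpenCV's bundled face detector; the filenames are hypothetical placeholders.

```python
# Sketch of the detect-then-blur technique (not Signal's implementation).
# Requires: pip install opencv-python. Filenames are placeholders.
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# OpenCV ships a pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Heavily blur just the detected face rectangle, in place.
    img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("photo_blurred.jpg", img)
```

Note that this kind of Gaussian blur is exactly the sort of obfuscation the reversibility caveat above applies to; overwriting the region with a solid color destroys the underlying pixels outright and is the safer choice when anonymity matters.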
