ALLACCESSSPORTINGNEWS


Other Technology News

  • Fri, 15 Jan 2021 15:10:03 +0000

    What if the Web Looked More Like Wikipedia?

    My first encounter with Wikipedia came in the form of an admonishment: a teacher’s warning we shouldn’t trust anything we read on the site, because anybody could write on it and anybody could edit it. Of course, the first thing any teenager does when they’re told not to do something is exactly that thing, and so began a lifelong fascination with Wikipedia. (Much to the chagrin of my grandfather, who, around the same time, had gifted me the entire Encyclopedia Britannica on CD-ROM; I’m fairly confident those disks never spun, just as many sets of the printed version given with the best of intentions were never opened.)

    More intriguing than Wikipedia itself was, and remains, the idea at its core: that the Internet can be a place not just for communication and entertainment, but also for collaboration and truth-seeking. It has rightfully been hailed many times as the pinnacle achievement of the philosophy of the “open web,” which has many definitions, but to me simply means: you can do almost anything here, together, without corporate influence. Today, Wikipedia is the open web’s last stand—Apple, Google and Amazon’s decision to ban right-wing darling social media app Parler from their respective platforms and services was absolutely the right choice, but it also made abundantly clear that the days of hoping for a truly open web have long since passed.

    Yet Wikipedia, which celebrates its 20th birthday on Jan. 15, lives on—and it’s not just surviving, but thriving. It’s the fifth most popular website among U.S. internet users, according to web tracking firm Semrush, with more than 15 billion visits every month, underscoring its evolution from distrusted upstart to one of the most dependable places of its size on the Internet. Yes, it has its problems—generations of philosophers would scoff at the notion that any single article could possibly represent the entire “truth” of a thing, and it endures constant attacks from vandals attempting to change the historical record. But those assaults are generally sorted out quickly.

    In part, Wikipedia is trusted because it’s open about what former Defense Secretary Donald Rumsfeld famously called “known unknowns.” On Facebook, Twitter or other social media sites, users often present fiction as fact, and uncertainty as confidence. Wikipedia, conversely, is up front about what it doesn’t know. “Our editing community does a phenomenal job being very transparent about what is known and unknown,” Katherine Maher, executive director of the Wikimedia Foundation, Wikipedia’s parent organization, told me in mid-December. “You see that in all breaking news articles, where they’ve got a little tag at the top that says, ‘this event is happening in real time, and some information may be changing rapidly.’ And it’s really a flag, it’s a warning to say ‘we don’t know all the facts here.'” That spirit, says Maher, permeates the site and its design. “You see this when Wikipedia says ‘this content is disputed,’ or ‘this article may not be neutral,’ or how it… presents different sides of controversy, so that the reader themselves has the opportunity to say, ‘now that I’ve reviewed this information, what’s the determination that I want to make?'”

    Wikipedia was already on my mind heading into January after my conversation with Maher. But I really glommed onto it while grappling with last week’s horrific attack on American democracy. The episode can be understood through many valid lenses, including those of racism, anti-Semitism and neo-fascism, given the presence of Confederate battle flags and neo-Nazi regalia among the rioters. But it’s also the most tangible real-world result yet of our current epistemological crisis (following other disturbing episodes, like Pizzagate). Many of the participants in the attempted government coup believe that last year’s presidential election was stolen, thereby justifying their actions—even though that’s a lie ginned up by the President and his allies and repeated verbatim on social media platforms like Facebook, Twitter and YouTube (plus certain cable news channels, of course).

    You’d be hard pressed, however, to find any of that bunk on Wikipedia; what it has are clear-eyed articles discussing the events as historical phenomena. That got me wondering: why does Wikipedia seem to have a general immunity to bullshit? And can we confer that immunity onto other social media platforms, as if receiving a vaccine?

    Wikipedia’s most obvious answer to our crisis of truth is moderation. When the social media platforms began to crack down on President Trump and various conspiracy theories after last week’s attack, they were exercising their immense moderation power in a way they’ve been reluctant to do until now. (That change might have something to do with the fact that the end of Trump’s tenure is just days away). But Wikipedia has had a culture of moderation since day one, with contributors and editors—all volunteers—going back and forth on articles until they agree on some form of truth. (In fact, the more contested a given Wikipedia article is, the more accurate its contents are likely to be, says Maher).

    Social media platforms hire moderators, but their focus is generally on the psychologically traumatizing work of removing illicit content like child pornography, rather than on Wikipedia-style debates sifting fact from fiction. So the work of countering falsehoods on social media falls to journalists, academics and other experts.

    As valiant as those efforts are, they’re doomed to fail. As I spoke with Maher, it became clear just how focused Wikipedia is on giving its moderators not just the power to do their jobs, but the tools, too. That includes long-standing features like footnotes, citations and changelogs—none of which are available to those attempting to correct falsehoods on social media platforms. Wikipedia’s product roadmap, too, is built around truth. For example, Maher says longtime editors who have earned the community’s trust now have access to technology that can identify and block or reverse attempts to vandalize pages by sophisticated attackers using multiple Internet Protocol (IP) addresses to make it seem like multiple users are agreeing on a given change, thereby faking community consensus.

    “This kind of—we call it P.O.V. pushing—is not just around misinformation, it can also be around whitewashing the articles of politicians, or it can be a state-sponsored initiative in order to do reputation management or political point of view pushing,” says Maher. “So we’ve worked with our communities to think about what tools they need to be able to more rapidly address these sorts of issues. The same tools that we use for this are the tools that we use for spam fighting. They’re the same tools that we use for identifying incidents of harassment.”

    Put another way, Wikipedia can offer the truth because it was (and is still being) built for the truth. It attracts truth-minded volunteers by offering them the tools they need to do their job—a stark contrast to social media sites, where pretty much every new feature (messages! snaps! stories!) is designed to goose engagement, and often makes bullshit easier to spread and harder to check. They were built so users could share banal life updates or pictures, their founders never anticipating their products would one day contribute to an attempted subversion of American democracy. Their user interfaces aren’t meant for truth, and the business models they have built up around those interfaces—based on hyper-targeted advertising and maximal engagement—can’t possibly accommodate it. You crack down on all the super-engaging bullshit, and your profits go out the window.

    “Unlike social media platforms, [Wikipedia] editors don’t fight for engagement—the incentives that push content to the extremes on other platforms simply don’t exist on Wikipedia,” say Miles McCain and Sean Gallagher, students performing research with the Stanford Internet Observatory who jointly responded to my questions. “After all, Wikipedia has no incentive to maximize engagement: it’s a non-profit, and not ad-supported.” Social media executives, meanwhile, have long held that they mostly shouldn’t be held accountable for the content posted on their platforms, and that policing that content opens a potentially dangerous Pandora’s box. “Having to take these actions fragment the public conversation,” said Twitter CEO Jack Dorsey on Wednesday night, reflecting on his company’s decision to ban Trump. “They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”

    As popular as Wikipedia is, its survival is not a given. Like all encyclopedias, it’s dependent on the existence of high-quality primary sources, including journalism—and my industry, for the most part, is not in great shape, especially at the local level. This isn’t just a question of the financial solvency of media outlets. If people don’t trust the sources upon which Wikipedia is based, why should they trust Wikipedia? Maher says the Wikimedia Foundation is cognizant of this, and is working on ways to support the knowledge ecosystem upon which Wikipedia relies.

    “It does no good for people to highly trust Wikipedia, but then not trust the institutions of the free press or to trust academic inquiry or scientific research,” she says. “All of these things need to have an underlying public confidence in order for us to be able to face them and for people to then have confidence in what’s on Wikipedia. So we are looking at this as a broader ecosystem question and saying, ‘where are the ways in which we sit maybe as a point of introduction to knowledge, but then also, what are our obligations for how we think about how to support this?'”

    If there’s a lesson to be learned from Wikipedia’s continued success, it’s this: build people the tools to effectively call out bullshit, and, like baseball-playing ghosts emerging from an Iowa cornfield, they will come. But what works for Wikipedia—an old-school Internet labor of love—is unlikely to work for a major corporate power with quarterly goals to meet and a market to appease. Maybe the answer, then, is for us as users to spend less time on Twitter, and more on Wikipedia.

  • Wed, 13 Jan 2021 22:54:11 +0000

    In These Tumultuous Times, Sea Shanty TikToks Have Suddenly Become a Port in the Storm

    While 2021 has already served up plenty of wretched bombshells, the best surprise of the past two weeks might be TikTok’s biggest new trend: sea shanties. As the world hurtles head-first into 2021, sea shanties are turning into a safe harbor for social media users looking to take their mind off the events of the day with some (very) old-timey entertainment.

    It all started when Scottish musician Nathan Evans recorded a video performing a remarkably catchy rendition of “Wellerman,” a 19th century shanty of New Zealand origin. It blew up online, garnering over four million views on TikTok alone. From there, it wasn’t long before other users were piggybacking on the trend by duetting the whaling tune alongside Evans or debuting sea shanty performances of their own. Thus, #ShantyTok was born.

    Despite the fact that the recent resurgence of sea shanties is owed entirely to the internet, their modern appeal certainly harkens back to their original purpose: synchronizing individual efforts to achieve a common goal.

    In centuries past, the purpose of these call-and-response work songs was, of course, to maintain a ship crew’s focus on safely navigating often dangerous waters. Whether the task was rowing, hoisting sails or hauling nets, the hand-over-hand beat of sea shanties was intended to help sailors keep time with each other.

    But amid a global pandemic that has kept many people at home and isolated for nearly a year, sea shanties can help foster a sense of community during a time when many are feeling lonely. As expertly noted by Vulture‘s Kathryn VanArendonk, at their heart, sea shanties are “unifying, survivalist songs, designed to transform a huge group of people into one collective body, all working together to keep the ship afloat.”

    They’re also undeniable earworms.

    Thanks to TikTok’s duet feature, which allows users to build on fellow TikTokers’ videos with their own additions, Evans’ solo rendition of “Wellerman” has been transformed into a complex split-screen harmony, complete with multiple vocal parts and instrumental accompaniment. It’s a group project that everyone in the video worked on individually, which makes its success all the more impressive.

    Affirming that the traditional shanty format possesses a timeless quality, some people have even begun turning popular songs of today, like Smash Mouth’s “All Star” and Cardi B and Megan Thee Stallion’s “WAP,” into sea shanties in their own right.

    A video of one reluctant sea shanty fan discovering the allure of these nautical ditties has also gone viral. All it took was a car ride and an inspirational brother.

    The 45-second clip perfectly showcases how shanty sing-alongs have a way of drawing others into the fold, or, as one commenter puts it, “I guess it makes sense when you think [about how] the artform originated as a way to provide pleasure, connection and entertainment during periods of sustained social isolation.”

    So if you’re in need of a distraction amid everything going on right now, #ShantyTok is here for you. Keep calm and shanty on.

  • Tue, 12 Jan 2021 19:23:27 +0000

    Big Tech’s Crackdown on Donald Trump and Parler Won’t Fix the Real Problem With Social Media

    If democracy is a river, or a forest, or a pristine meadow, then social media platforms are a factory spewing toxic pollutants into it. Even if you block the new effluent, the pollution that has already escaped won’t just go away. It needs to be cleaned up.

    That’s the analogy used by Whitney Phillips, one of the world’s leading experts on the rise of the far right online.

    Twitter and Facebook’s ban of President Trump last week, and the deplatforming of the rightwing social network Parler by Apple, Google and Amazon on Monday, are crucial first steps in stemming the flow of pollution, says Phillips, who is an assistant professor at the Syracuse University department of communication and rhetorical studies. But more is still spilling out, and that’s before you even get to the question of how to clean up what’s already escaped.

    Read More: Why Facebook and Twitter Suspended Trump’s Accounts After Capitol Riots

    “The real thing that we have to deal with long term is how these platforms didn’t just allow, but actively incentivize the spreading of this pollution. For years and years and years and years, it was allowed to build up in the environment for so long, such that you now have this enormous percentage of the population that has really internalized so much of this pollution,” she says. “You can take away Parler. But that’s not going to take away the belief in tens of millions of people that the election was stolen.”

    Phillips and others who research extremism on social media say the core algorithms and business models of the biggest social platforms like Facebook, Twitter and Google-owned YouTube are in large part to blame for the series of events which led eventually to a violent insurrectionist mob of President Trump’s supporters storming the seat of American democracy on January 6. Those platforms amplify content that provokes emotional reactions above all else, Phillips says. That means “historically, the platforms have actually done more to benefit and embolden the right” than any other political grouping.

    Trump supporters near the U.S. Capitol on January 6, 2021 in Washington, D.C. (Shay Horse/NurPhoto via Getty Images)

    On Monday, Amazon pulled Parler’s web hosting, which was provided through Amazon Web Services (AWS), forcing it offline. Google and Apple each suspended Parler from their app stores over the weekend, citing its lax moderation practices and the danger that violence was being planned on the platform. “We cannot provide services to a customer that is unable to effectively identify and remove content that encourages or incites violence against others,” Amazon said in a letter to Parler. In response, Parler filed a lawsuit against Amazon on Monday.

    The cutting off of Parler came shortly after President Trump himself was banned permanently from Twitter two days after the storming of the Capitol. He was also suspended from Facebook until Joe Biden’s inauguration at the earliest. Justifying Trump’s suspensions, Twitter and Facebook said the President’s continued presence would have increased the risk of violence and potentially undermined the peaceful transition of power. Trump’s YouTube channel remains online, though YouTube deleted a video in which he praised rioters who stormed the Capitol.

    Parler has risen in popularity over the past year, as mainstream social networks like Facebook, Twitter and YouTube have slowly built up their guardrails against misinformation and conspiracy theories like QAnon and election fraud, and banned the users who violate their policies most egregiously. Despite those efforts, the mainstream platforms remain hotbeds of misinformation. Still, the rise of Parler (and now, the movement of many Parler users to the messaging app Telegram) is a sign that even if Facebook, YouTube and Twitter manage to eradicate the pollution from their platforms entirely, it will still exist, swilling around American democracy in the form of radicalized users and exploited by opportunistic politicians and unscrupulous media.

    Read More: White Supremacism Is a Domestic Terror Threat That Will Outlast Trump

    Researchers who study disinformation and the far-right online say that deplatforming can be successful. They point to the main platforms’ bans of Alex Jones, the founder of conspiracy theory site InfoWars, and Milo Yiannopoulos, a far-right former Breitbart editor, and to the shutting down of Reddit forums catering to incels or the most toxic of Trump supporters, as examples of successfully reducing the number of people such messages can reach. However, researchers point out, deplatforming may do nothing to deradicalize the most devoted users—or reduce the risk of violent attacks. (The FBI says that armed protests are being planned at all 50 state capitols and the U.S. Capitol in the days leading up to, and the day of, Biden’s inauguration.)

    At this late stage, when so many people are already radicalized, the solutions have to be more complex than simply deplatforming people, says Phillips. “I think that they made the right call,” she tells TIME of the platforms’ decisions to deplatform Trump and Parler. But up until this point, she says, they have “continually made the wrong calls, or opaque calls, or inconsistent calls, or calls that ultimately allowed this to happen for so long.”

    The tech companies’ eventual decisions to deplatform Trump have quickly fed into conspiracy theories about Silicon Valley unfairly censoring conservatives, a narrative pushed by Republicans and online conservatives over the past several years. Now, politicians like Trump are galvanizing their supporters with claims they are unfairly having their freedom of speech restricted by a cabal of companies bent on overturning Trump’s supposed election victory.

    Trump supporters take the steps on the east side of the U.S. Capitol building on January 6, 2021 in Washington, D.C. (Shay Horse/NurPhoto via Getty Images)

    Experts in the field also remain troubled by the problem of big corporations like Facebook, Google and Amazon having the sole power to decide who can and can’t have an online voice. In Vietnam, Facebook has complied with requests from the authoritarian government to remove accounts of dissidents. In India, it has evaded banning ruling party lawmakers even when they’ve broken its rules. Experts are troubled by the timing of the decision in the U.S.: neither Facebook nor Twitter decided to suspend Trump until after the Democrats won control of the Senate on Jan. 6 and Biden was confirmed by lawmakers as the next President. “It is hard to view this decision, and the timing, as anything other than trying to cozy up to power, as opposed to some form of responsible stewardship of our democracy,” said Yael Eisenstat, a former Global Head of Elections Integrity Operations for political advertising at Facebook, in a statement.

    Facebook and Twitter did not immediately respond to TIME requests for comment. In his statement announcing Trump’s suspension, Facebook CEO Mark Zuckerberg said: “Over the last several years, we have allowed President Trump to use our platform consistent with our own rules, at times removing content or labeling his posts when they violate our policies.” This is true, but only because Facebook wrote an exemption into its own rules that allowed posts by public figures like Trump to remain on the platform even if they broke some rules.

    In Russia, where a rightwing autocrat is in power, dissidents viewed Twitter’s decision to ban President Trump with extreme skepticism. “In my opinion, the decision to ban Trump was based on emotions and personal political preferences,” tweeted Russia’s main opposition leader, Alexey Navalny. “Of course, Twitter is a private company, but we have seen many examples in Russia and China of such private companies becoming the state’s best friends and enablers when it comes to censorship.” German Chancellor Angela Merkel also raised concerns about the move’s implications for free speech.

    Read More: How Ashli Babbitt Is Being Turned Into a Far-Right Recruiting Tool

    While the Biden Administration is mulling reform of Section 230, the law that allows platforms legal protection from accountability for what is posted on them, tech policy experts say that it is low on the incoming President’s list of priorities. His tech policy point man, Bruce Reed, has expressed a desire to reform Section 230 in the past. In December, Senator Mark Warner, a leading Democratic critic of Facebook, told TIME that Biden’s approach could include invoking civil rights laws to bring stricter penalties for people spreading hate speech or racist activity online. But the incoming team is also stacked with former employees of big tech companies, which has left many activists prepared for a fight ahead over the shape of Biden’s tech policy. “Quite frankly, if people in the Biden Administration want to spend their time and energy fighting to help Mark Zuckerberg make more money, then that’s a fight I will take up,” says Rashad Robinson, President of Color of Change, one of the first civil rights groups to call for Trump to be deplatformed back in 2017.

    On Monday, nine days before President Joe Biden is set to be inaugurated, and after years of ignoring calls from civil society to appoint a senior executive with civil rights expertise, Facebook announced that a former Obama Administration official, Roy Austin Jr., would be its first ever vice president for civil rights, responsible for overseeing the company’s accountability on racial discrimination and hatred. He starts the day before Biden’s inauguration.

    It may be inevitable that political pressure will always have some bearing on the way platforms moderate themselves. In this case, the platforms are finally pivoting their enforcement to respond to Democratic pressure—which happens to align somewhat with civil society—after years of largely ignoring those calls under the Trump Administration. But still, experts say, the core problem remains. “The underlying problem here is not the [platform] rules themselves,” writes technology columnist Will Oremus for OneZero, “but the fact that just a few, for-profit entities have such power over global speech and politics in the first place.”

  • Sat, 09 Jan 2021 00:03:27 +0000

    Twitter Permanently Suspends President Donald Trump’s Account

    Two days after the deadly insurrection at the U.S. Capitol, Twitter announced on Friday evening that it was permanently suspending President Donald Trump’s account “due to the risk of further incitement of violence.”

    In a series of tweets on its @TwitterSafety account, the social media giant said that Trump’s account had continued to violate its rules even after the company warned him by temporarily locking the account on Wednesday evening, following the insurrection that caused the deaths of at least six people, either at the Capitol or from injuries sustained there.

    “After close review of recent Tweets from the @realDonaldTrump account and the context around them we have permanently suspended the account due to the risk of further incitement of violence,” Twitter said in its announcement. “In the context of horrific events this week, we made it clear on Wednesday that additional violations of the Twitter Rules would potentially result in this very course of action.”

    The move comes a day after Facebook CEO Mark Zuckerberg said that Trump’s Facebook and Instagram accounts would be indefinitely suspended at least through President-elect Joe Biden’s inauguration on Jan. 20, because “the risks of allowing the President to continue to use our service during this period are simply too great.”

    During the chaotic and violent events on Capitol Hill Wednesday, when a pro-Trump mob broke into the U.S. Capitol building to disrupt the certification of the Electoral College vote, Trump tweeted out a short video telling the occupying group to go home and that they were “loved” and “special.”

    In the short video, Trump continued to spread misinformation about the presidential election, saying again that it was rigged and stolen. Shortly afterwards, he tweeted out a similar message in text. “These are the things and events that happen when a sacred landslide election victory is so unceremoniously & viciously stripped away from great patriots who have been badly & unfairly treated for so long,” the President tweeted on Wednesday afternoon. “Go home with love & in peace. Remember this day forever!”

    At first, Twitter reacted by locking down both messages, so that they could not be liked, retweeted or replied to, before ultimately suspending the account for 12 hours. On Friday, after the suspension was lifted, Trump only tweeted twice. The first was to praise those who voted for him, saying that they “will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” His last tweet, at least from this suspended account, was an announcement that he would not attend Biden’s inauguration.

    From the cryptic “Covfefe” to caps-lock declarations of “WITCH HUNT,” in many ways Twitter has defined his presidency. To his more than 88 million followers, he wielded it like a policy wishlist, a cudgel to publicly scorn his enemies or fire members of his Administration and his preferred way to offer political endorsements.

    Especially over the past year, both Trump’s usage of Twitter and his criticism of the company have increased. During 2020 and in the run-up to the election, he would tweet and retweet sometimes 200 times a day. In May 2020, Trump tweeted a baseless allegation that the use of mail-in ballots in elections would be “substantially fraudulent” and, for the first time, Twitter flagged it as an “unsubstantiated claim.” As the President continued to spread misinformation about elections and the COVID-19 pandemic, Twitter continued to crack down on his tweets, with escalated warnings.

    Trump and other prominent Republicans said they viewed Twitter’s moves as censorship, and have called for the repeal of Section 230, a provision of a 1996 Internet law; repealing it would change the rules by which litigation could be brought against social media companies. Trump’s own interest in repealing this sliver of the law was so great that last month he vetoed a bipartisan defense spending bill for not including language that would defend him against his perceived censorship. Congress has since overridden his veto.

    Though the President’s use of social media brought initial confusion over whether the statements were to be regarded as official statements from the White House, the weight of his tweets was understood shortly after his inauguration. Trump’s first Press Secretary Sean Spicer said in the spring of 2017 that he considered the President’s tweets to be official statements from the White House. Multiple judges in the first year of Trump’s presidency concurred. The official stature of Trump’s Twitter account was decided by a federal appeals court in 2019, when a unanimous panel declared that the President violated the Constitution by blocking people on the social media site, as doing so in effect kept U.S. citizens from accessing official public statements.

  • Thu, 07 Jan 2021 12:34:16 +0000

    Facebook and Twitter Finally Locked Donald Trump’s Accounts. Will They Ban Him Permanently?

    Twitter and Facebook imposed their toughest restrictions so far on President Donald Trump on Wednesday evening, after he incited his supporters to storm the U.S. Capitol in Washington in an attempt to overturn his election loss. Both companies temporarily suspended the President from posting on their platforms and removed several of his posts, but stopped short of permanently banning him.

    The account suspensions are the farthest either platform has gone in restricting Trump from broadcasting his message directly to his tens of millions of followers. The moves come after years of calls for social media companies to do more to stop the President from spreading misinformation, conspiracy theories and threats that undermine democracy.

    Twitter required Trump to delete three tweets that the company said violated its rules, and said it would suspend his account from posting for 12 hours after their removal. “If the Tweets are not removed, the account will remain locked,” Twitter said in a statement. The tweets are no longer visible on his profile. The company also said that if Trump violated its rules again, his account would be permanently banned.

    After Twitter acted, Facebook suspended Trump from posting for 24 hours from Wednesday evening, and deleted two posts it said violated its rules. Instagram, owned by Facebook, did the same. Then, on Thursday, Facebook CEO Mark Zuckerberg said in a statement that Facebook would be extending the block on Trump indefinitely, and for at least two weeks until “the peaceful transition of power is complete.”

    Read More: Photographs From Inside the Chaos at the Capitol

    Twitter, Facebook and YouTube also removed a video posted by Trump in which he called on rioters to go home, but doubled down on his false claims that the election was stolen and told rioters he loved them. “This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video,” Facebook’s chief of safety and integrity, Guy Rosen, said in a statement. “We removed it because on balance we believe it contributes to rather than diminishes the risk of ongoing violence.”

    A pro-Trump mob enters the Capitol Building after breaking into it on January 6, 2021 in Washington, D.C. (Jon Cherry/Getty Images)

    Facebook and Twitter have long had rules against inciting violence on their platforms, but throughout Trump’s presidency they have refused to suspend or ban Trump in cases where critics say he has fanned the flames of violence.

    As Black Lives Matter protests spread around the country in late May after the police killing of George Floyd, Twitter prevented most ways of engaging with a tweet by Trump that said “when the looting starts, the shooting starts”—which the company said violated its rules on incitement to violence. But it allowed the tweet to remain accessible behind a warning message, with Twitter saying in a statement it “determined that it may be in the public’s interest for the Tweet to remain accessible.” Facebook, meanwhile, refused to take any action against Trump’s post, prompting some employees to stage a walkout.

    Read More: Trump’s Twitter Has Defined His Presidency. Here’s Why That Won’t Change

    Chief among the platforms’ reasons for not banning Trump at that point was that, as President, his words were inherently worthy of public attention, scrutiny and discussion. That appeared to change after Wednesday’s assault on the Capitol. “There have been good arguments for private companies to not silence elected officials, but all those arguments are predicated on the protection of constitutional governance,” said Alex Stamos, Facebook’s former chief of security, in a tweet on Wednesday shortly before Facebook and Twitter temporarily suspended Trump’s accounts. “The last reason to keep Trump’s account up was the possibility that he would try to put the genie back in the bottle but as many expected, that is impossible for him.”

    The events of January 6, 2021, were—to the tech platforms at least—a glaring sign that the risk of violence, and to democracy, was now greater than the necessity to continue giving a sitting President a platform to speak. And for platforms steeped in the very American notion that freedom of speech is core to democracy, it was a belated acknowledgement of a key lesson from history: that sometimes, a democratically-elected leader can intentionally undermine democracy with inflammatory speech and a large platform. Many democracies have caveats to free speech rules to prevent that from happening. Facebook and Twitter have similar rules for most ordinary users, who can be banned for inciting violence. Until now, Trump had gotten away with little more than a slap on the wrist for using the platforms to do the same.

    Zuckerberg justified waiting until only 13 days remained in Trump’s presidency to suspend him, saying Wednesday’s events changed things. “Over the last several years, we have allowed President Trump to use our platform consistent with our own rules, at times removing content or labeling his posts when they violate our policies,” he said Thursday. “We did this because we believe that the public has a right to the broadest possible access to political speech, even controversial speech. But the current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government.”

    Activists said the bans should just be the start of a tougher approach to big tech regulation by the Biden Administration. “The riots in D.C. yesterday demonstrate very clearly the consequences of disinformation amplified in social media with such incessant frequency that it becomes an alternative reality for those targeted with lies and conspiracy,” said Ben Scott, the executive director of Reset, a group lobbying for stricter regulation of tech platforms, in a statement to TIME. “Regulating these tech platforms should be a major priority for the Biden administration.”

    Even after suspending Trump’s accounts, Facebook and Twitter remain platforms where misinformation circulates almost freely. On Thursday morning, a Facebook group called “Stop the Steal” with more than 14,000 members continued to be accessible on the platform. And while Facebook has designed algorithms that since 2019 (when they work properly) have pointed users in the direction of fact-checks when verified false information is posted by individual users, those protections are still easily circumvented when users post screenshots of a piece of misinformation instead of linking directly to it.

    And the relatively short lengths of the suspensions (12 hours from Twitter and 24 from Facebook) show just how reluctant the platforms still are to fully ban a sitting President. Still, neither platform has ruled out taking further action against Trump—and the possibility that he might be banned entirely after stepping down as President remains open.

    “This temporary ban doesn’t go far enough,” said Rashad Robinson, president of Color of Change, a group that has long called on social media platforms to ban President Trump. “Ban him permanently. He’s done enough damage. Do not allow him to return in a day to continue to spread dangerous misinfo.”

  • Wed, 06 Jan 2021 22:39:38 +0000

    Facebook Blocks President Trump’s Account ‘Indefinitely’ After He Incited Mob That Stormed Capitol

    Facebook has blocked President Donald Trump from using its service—including Instagram, which is owned by Facebook—for the remainder of his presidency after he incited supporters in Washington, D.C., who turned violent and stormed the U.S. Capitol.

    In a Thursday morning post, Facebook CEO Mark Zuckerberg announced that the company was extending the block it had placed on Trump’s Facebook and Instagram accounts on Wednesday until at least the inauguration of President-elect Joe Biden on Jan. 20—and perhaps longer.

    “We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”

    The move came after social media platforms removed posts Wednesday by Trump, and blocked him from posting to his tens of millions of followers. At about 7 p.m. ET, Twitter announced it had locked Trump’s Twitter account for 12 hours and removed three tweets, including a video message from the President to his supporters, that the platform said carried a “risk of violence.”

    Facebook announced at about 8:30 p.m. that it had locked the President’s account on its platform for 24 hours, preventing him from posting, after also removing the Trump video. Instagram blocked Trump from posting for 24 hours as well.

    Read more: Incited by the President, Trump Supporters Violently Storm the Capitol

    In the banned video, posted about 4 p.m., Trump spoke directly to his supporters, whom he had urged to gather in Washington D.C. as Congress took up the Electoral College certification that would finalize Joe Biden’s victory over the President. Trump used the video to once again make false claims of election fraud, before encouraging his supporters, who had unlawfully entered the Capitol building, to go home in peace.

    “You have to go home now. We have to have peace. We have to have law and order,” he said. “So go home. We love you. You’re very special…I know how you feel.”

    Later in the evening, Trump again tweeted false information about the election and urged those who stormed the Capitol “go home with love & in peace.” Twitter similarly restricted engagement with that tweet.

    Social media companies have long been under pressure to prevent Trump from posting false and misleading posts on his accounts. Trump has more than 88 million followers on Twitter and 33 million on Facebook.

    Read more: How the World Is Responding to a Pro-Trump Mob Storming the U.S. Capitol

    Twitter has previously flagged numerous tweets from Trump for disputed information, but Wednesday’s action appears to be the first time Twitter has both locked engagement with one of his posts and flagged it for a “risk of violence.”

    “As a result of the unprecedented and ongoing violent situation in Washington, D.C., we have required the removal of three @realDonaldTrump tweets that were posted earlier today for repeated and severe violations of our Civic Integrity policy,” a Twitter statement said. “This means that the account of @realDonaldTrump will be locked for 12 hours following the removal of these Tweets. If the Tweets are not removed, the account will remain locked. Future violations of the Twitter Rules, including our Civic Integrity or Violent Threats policies, will result in permanent suspension of the @realDonaldTrump account.”

    Twitter also warned that Trump risks a permanent suspension of his account if he violates the platform’s rules in the future.

    Facebook, in a statement posted on Twitter, said: “We’ve assessed two policy violations against President Trump’s Page which will result in a 24-hour feature block, meaning he will lose the ability to post on the platform during that time.”

  • Tue, 05 Jan 2021 20:20:41 +0000

    Health Workers Are Going Viral on TikTok for Debunking COVID-19 Myths

    At first glance, the December video looks like just the latest rendition of a TikTok trend. On one side of the split screen “duet,” a video game car bounces down a mountain; on the other, the TikTok user “dr.noc” scrambles to talk as much as he can before the car slams into the ground. But while other videos feature stream-of-consciousness chatter, Dr. Noc’s words are precise. Noc, who in real life is Morgan McSweeney, a PhD scientist who researches treatments for diseases like COVID-19, is trying to debunk as many myths about coronavirus vaccines as he can before the final animated explosion.

    Embedded TikTok from @dr.noc (♬ original sound – BeamNG.Drive): “#duet with @beamng.crashed Follow for actual facts, not spooky paranoia. #covid19 #coronavirus #covidvaccine #science #drnoc”

    Since the start of the pandemic, misinformation of the sort debunked by McSweeney has mushroomed across TikTok, spreading rapidly thanks to an algorithm that has allowed misleading videos to rack up thousands of views before the app can remove them. Many of these videos are as simple as an individual talking to their video camera about some false fact, but they can take off—perhaps because fiction is (usually) stranger than the truth; a convoluted conspiracy theory involving the government and global billionaires can be far more compelling than the straightforward reality that a vaccine is safe and effective. However, these videos are more insidious than legends about Bigfoot. Misinformation about COVID-19 can discourage people from taking precautions that limit the spread of the virus, such as receiving a vaccine.

    TikTok misinformation is unique in its reach among the very young, who comprise the majority of its user base. Despite the fact that young adults are less likely to get severe COVID-19 disease, stopping the spread of the virus among this demographic is essential to limit the damage done by the pandemic. The U.S. Centers for Disease Control and Prevention has found that outbreaks of the virus among young people seem to drive later outbreaks among older people, who are more likely to become severely ill and die from the disease. But slowing the spread of the virus among the young poses particular challenges. Young people are more likely to work frontline jobs like food service that make social distancing difficult; additionally, people who are 18 to 29 years old are less likely to take precautions known to slow the spread of COVID-19, such as avoiding crowds and maintaining a distance of six feet from other people.

    While the young tend to be more comfortable online than older people, that hasn’t inoculated them against the spread of false claims. In fact, some studies have shown they seem even more prone to believe misinformation about the pandemic: A September survey of more than 21,000 Americans by researchers led by a group from Northeastern University found that adults under 25 had the highest probability of believing a false claim about COVID-19. For instance, 28% of respondents ages 18 to 24 incorrectly believed that the coronavirus passed to humans by eating bats, compared to just 6% of people over 65.

    One reason young people often believe misinformation seems to be that they tend to get more of their news on social media. In 2018, 36% of Americans ages 18 to 29 said they often get news on social media, making it the most common news source for that age group, according to Pew Research Center polling. And for many young people in 2021, social media means TikTok; in 2019, about 60% of the 26.5 million active monthly TikTok users were between 16 and 24, Reuters reported.

    Although the company has mounted an effort to cut back on false claims—including taking down 29,000 videos about the virus posted by European users this summer—you don’t have to look far to find misinformation about COVID-19 on the app, from false claims about vaccines to misleading posts about masking. However, the spread of misinformation on TikTok has also had the effect of drawing in scientists and healthcare workers to combat false claims with their expertise.

    At the forefront are scientists like McSweeney, who has tirelessly posted COVID-related clips of himself on the app since last winter. McSweeney says that because even users with small followings can post videos to TikTok that gain a major audience, it’s a great way to reach new people, especially the young, who might otherwise miss important facts about the pandemic–or be exposed to misinformation. McSweeney says that homemade TikToks seem to come off as more authentic than polished videos by official organizations like the CDC. “When it’s just you in front of a camera, it’s a little bit more like a conversation,” McSweeney says.

    It’s difficult to get an exact estimate of how many healthcare workers and scientists use TikTok to talk about their work and public health issues, but they appear to number at least in the dozens. Although some of the most popular health TikTokers have become celebrities elsewhere, including dermatologist Dr. Sandra Lee (who first gained notoriety as “Dr. Pimple Popper” on YouTube), most are everyday nurses and doctors who spend their days caring for patients. Prior to the pandemic, many of them covered perennial favorite topics—such as women’s health and dermatology—but in the last year, many of their videos have turned to the pandemic and, more specifically, to dispelling the misinformation proliferating across social media.

    To combat misinformation on TikTok, scientists like McSweeney draw upon their expertise to dissect complex science for their viewers, and back it up with real evidence. By posting videos from their living rooms on TikTok and responding to comments, they’re also able to build familiarity with their viewers. For example, Kristin Patel, a 29-year-old Illinois-based graphic designer, says that she started deliberately avoiding the news in 2020. Between what she sees as the political polarization of news sources and the ever-rising COVID-19 death count, she realized that she just didn’t want to hear any more. But McSweeney won Patel over with the way he combined scientific evidence with entertainment, and has remained a constant presence for her throughout the pandemic.

    “I think seeing Doctor Noc’s face from the beginning, I trust Doctor Noc way more than I trust NBC, or some, like, no-name reporter. I don’t know their agenda. But I know that Dr. Noc doesn’t really have an agenda, outside of science,” Patel says.

    Dr. Rose Marie Leslie, a chief family medicine resident at the University of Minnesota Medical School, frequently posts TikTok videos about health issues from her home and the hospital. Leslie, who TikTok named one of the “most impactful creators” of 2020, says it’s especially important to her to reach young people, because many of them are at a time in their lives when they’re really hungry for health information, but don’t know where to look for it and often aren’t going to the doctor frequently. Leslie aims to show young people that the decisions they make about COVID-19 can make a huge difference for their communities.

    “I just had a direct message from somebody who said, ‘I’ve been wearing my mask every single time I go out, because I’ve been watching your videos. Thank you so much.’ Just little things like that are so meaningful to me—knowing that there are people who are listening,” says Leslie. Among her most popular TikToks is a video of her getting the vaccine and sharing her experience with side effects—just some tenderness and soreness in her arm, although she noted that there can be others, like headaches.

    Embedded TikTok from @drleslie (♬ original sound – Doctor Leslie): “Honestly TOTALLY WORTH It!!! #learnontiktok #tiktokpartner #vaccine #covid19 #doctor”

    Combating misinformation about the pandemic among younger Americans has taken on even greater urgency as the U.S. has begun to roll out vaccines—given how important those vaccines are to ending the pandemic, and how malignant anti-vaccination sentiment is in the U.S., especially on social media. Survey data suggest that younger adults are more hesitant about getting vaccinated than older U.S. residents; only about 55% of adults 18 to 29 and 53% of those 30 to 49 said they definitely or probably would get a COVID-19 vaccine, compared to 75% of those older than 65, according to a Pew Research Center survey conducted in November.

    Healthcare workers like Christina Kim, an oncology nurse practitioner at Massachusetts General Hospital, have countered misinformation head-on with their own videos, as well as by taking questions from their audiences. Kim, who has over 228,000 followers on TikTok, for example, posted a Dec. 13 TikTok responding to a comment from a viewer who was confused about why vaccines don’t give people COVID-19.

    Embedded TikTok from @christinaaaaaaanp (♬ original sound – CHRISTINA NP): “Reply to @megruss9 Keep the questions coming! #covid19 #wearamask #covidvaccine #medicaleducation”

    Kim tells TIME she’s thought at times about quitting the app, given the level of angry messages she’s received from people who disagree with her posts. However, she feels a sense of responsibility to fight misinformation, even if it’s just a “drop in the bucket for the pandemic on the whole.”

    “I genuinely want this pandemic to end. I want people to recognize what we need to do to make it end,” says Kim. “And I have responsibility, now with this platform that I have, I think it would almost be irresponsible to step away from that.”

  • Thu, 31 Dec 2020 16:54:35 +0000

    How Domestic Abusers Have Exploited Technology During the Pandemic

    When Julie’s boyfriend came home with a brand new iPhone for her at the end of the summer in 2019, Julie saw it as a peace offering—a sign that their relationship was on the mend.

    A few weeks earlier, her boyfriend Steve had flown into a rage, trashing the apartment they shared, punching Julie in the face and breaking her nose. He’d smashed her phone when she tried to call for help. But now, here he was with a replacement phone, and despite Steve’s past behavior, Julie convinced herself the gift was a sign things would be alright. (Julie asked TIME to use pseudonyms for her and Steve to protect her privacy.)

    She was particularly impressed that her boyfriend of two months had set up the new phone with her favorite apps and was encouraging her to get out and see friends.

    “I had never been allowed to go out and enjoy myself,” says Julie, a 21-year-old living in London. “I thought it was a change in our relationship.”

    The euphoria didn’t last. Six months later, as COVID-19 sent the U.K. hurtling into a lockdown, Julie found herself in a nightmare shared by untold numbers of domestic violence victims: trapped with an abuser who was exploiting the pandemic and using technology to control her every movement.

    Read More: As Cities Around the World Go on Lockdown, Victims of Domestic Violence Look for a Way Out

    Abusers have long used tech to spy on victims, but the pandemic has given them greater opportunities than ever before. It’s much easier to get access to a partner’s phone to alter privacy settings, obtain passwords, or install tracking software when people are spending so much time together in close proximity. For couples not in lockdown together, abusers may feel a greater need to track their partners. Survivors have also reported that their abusers are surveilling them in an attempt to gather evidence of them breaking lockdown rules and using it against them.

    Compounding the problem: it’s much harder for targets of abuse to escape as the fear of infection discourages them from moving in with relatives and friends or fleeing to shelters. And in-person counseling and other programs that serve people in abusive relationships who need help have been curtailed.

    The problem of tech abuse pre-dates the pandemic, though data is limited. The U.K.-based organization Refuge, which assists domestic violence survivors, said in 2019 that around 95% of its cases involved some form of tech abuse, ranging from tracking a partner’s location using Google Maps to downloading stalkerware and spyware apps on phones. In 2019, the U.S.-based National Network to End Domestic Violence found that 71% of domestic abusers monitor survivors’ device activities, and that 54% had downloaded stalkerware onto their partners’ devices. A study published by the Journal of Family Violence in January 2020 found that 60–63% of survivors receiving services from domestic violence programs reported tech-based abuse.

    Experts say that the pandemic has likely made the problem worse. In July, the antivirus company Avast said that after COVID-19 placed people around the world in lockdown, rates of spyware and stalkerware detection skyrocketed, increasing by 51% globally within a month of lockdowns being implemented in March. In June, the antivirus company Malwarebytes found that there was a 780% increase in the detection of monitoring apps and a 1677% increase in the detection of spyware since January. While antivirus companies expected to see a small rise in the number of detected spyware apps due to improvements in their detection technology, the dramatic increase during lockdown was a red flag to them that abuse was increasing.

    Eva Galperin, the director of cybersecurity at Electronic Frontier Foundation, says that anti-virus companies have good reason to warn that tech abuse is on the rise—it lets them portray themselves as solutions to a dangerous problem. “Having said that, this doesn’t mean stalkerware isn’t an increasing problem,” she says, “and that they aren’t the solution.” Domestic violence organizations have reported an increase in the number of reported tech abuse cases since the pandemic began in March, corroborating the findings of antivirus companies. Some survivors have reported stealth surveillance while others have been forced to share their locations with their abusers 24/7. Refuge reports that 40% of the 2,513 tech-abuse survivors who have sought their services since the pandemic began had also experienced sexual violence, and 47% had been subject to death threats.

    “In lockdown, many of the women we supported were living with perpetrators of abuse, and we received countless reports of tech threats,” says Jane Keeper, the director of operations at Refuge.

    One of those women was Julie.

    When Julie, a hairdresser, met Steve on Tinder in June 2019, the connection was immediate. Within weeks, they were living together. And just weeks later, he began hitting her. Like many people in abusive relationships, Julie convinced herself that Steve would change, even as the violence became worse during their time together.

    Then he gave her the new phone. Things seemed to improve, though Julie noticed that Steve was obsessed with making sure she always carried the phone with her and didn’t let the battery die. One evening a few weeks after he gave her the phone, Julie was on a cab ride home and received a text from Steve asking her to stop at McDonald’s to grab dinner, telling her she would be passing one in five minutes. “How does he know what I’m doing?” Julie remembers thinking to herself.

    She knew better than to ask him to explain. It would only make him angry. As months passed, Steve’s violent flare-ups returned, and Julie became increasingly concerned for her safety.

    Finally in February 2020, Julie felt she could no longer handle the violence and controlling behavior. She contacted police, who put her in touch with Refuge, whose tech team assessed her phone.

    “That’s when it clicked,” Julie says. “The phone was hacked.”


    Steve had been using the new phone against Julie from the start. Among other things, he’d obtained her passwords to log into her social media accounts and had changed the privacy settings to track her location when she was out.

    Such tactics instill fear in a person being abused; they know that if they change their phone’s settings, it will quickly become clear to the abuser. “So you just have to let it happen,” says Julie, who blocked Steve in February, only to have him find a way to access her accounts again later when they got back together.

    Another form of tech abuse involves installing software on a device that enables someone to track and record everything, from text messages to phone calls. Steve had also done this with Julie’s phone.

    Rebecca, 42, endured yet another form of tech abuse—involving a “smart” doorbell. Rebecca learned that her ex-husband was keeping tabs on her via the camera-equipped doorbell system on the London home where she lived with the couple’s children. (Rebecca asked that TIME use a pseudonym to protect her and her children’s privacy). But Rebecca feared taking the camera down. “He would tell me, ‘if you take those cameras down, you’re compromising the security of our children and I’ll report you to the police,’” she says.

    So when the pandemic struck, Rebecca kept the cameras in place. In April, she says a neighbor saw Rebecca’s ex-husband beating her and called police. When officers arrived, the ex-husband told them he had video footage of Rebecca’s friend visiting her during the lockdown period, against coronavirus restrictions. “He used the doorbell to spy on what I was doing to try to get me in trouble with the police,” says Rebecca. (Police never followed up on the claims by Rebecca’s ex-husband that she was violating quarantine rules, she says.)

    Many countries, including the U.K., have laws against stalking, but stalkerware apps themselves generally are not illegal unless it can be proved that they were marketed specifically to enable abuse. In the United States, for instance, only two stalkerware companies faced federal consequences between 2014 and 2019. One was ordered to shut down its application and pay a $500,000 fine. The other was barred from promoting its products.

    Companies that market the software have a variety of means for dodging liability. Some avoid legal action by disguising themselves as parental surveillance applications. A stalkerware company that used to market itself as “Girlfriend Cell Tracker” now identifies as “Family Locator for Android,” according to Kevin Roundy, a researcher at NortonLifeLock, a cybersecurity company based in Tempe, Arizona.

    “The application has the same functionality,” Roundy says. “It was clearly designed to covertly track a girlfriend but now is saying its purpose is to keep kids safe.” Part of the problem is that app stores allow these companies to market their products on their platforms: ‘Family Locator for Android’, for instance, remains available on Google Play Store.

    Advocates say one solution would be to make it illegal for parental surveillance applications to operate in stealth mode, which hides the app so the device’s user is unaware they are being watched. “It’s the stealth mode functionality of stalkerware that is extremely problematic and allows it to be misused,” says Galperin. “There is no reason whatsoever for companies not to have addressed this except that there is a market for it.”

    Galperin says a big challenge of getting lawmakers interested in the problem is that cybersecurity debates orbit around questions of national security, not threats to individuals.

    During the nearly one year they were together, Julie broke up with Steve at least once and even called the police on him to report the abuse. He was arrested, then released on bail, and the case was dropped. Eventually, the couple reunited—not unusual in abusive relationships, where victims are often driven by fear, financial dependence, and a genuine belief that they can fix the relationship.

    But after the U.K. went into lockdown on March 23, Julie regretted letting Steve move back in with her. “It was his perfect scenario,” she says. “He could see and watch everything I was doing.”

    Once, she sought refuge at a friend’s house. When she returned to the apartment, Steve poured bleach on her. “He said he could smell someone else on me,” Julie says. Finally in June, she broke up with Steve for good after again reporting his abusive behavior to police. They arrested Steve on domestic abuse charges, then released him on bail a few weeks later. Julie says she has not had contact with him since then.

    Julie is now free from her previous relationship, but knows many others are not. And though the pandemic makes it more difficult for survivors to seek help, Diana Freed, a PhD candidate in Computing and Information Science at Cornell Tech who volunteers at the Clinic to End Tech Abuse, says it is crucial that survivors know there are still resources available to them. Her clinic, like many organizations, has made tech abuse services and information available online, offering webinars on how to disconnect from surveillance applications or leave toxic relationships.

    For women like Julie and Rebecca, these services have been lifesaving during the pandemic. With the help of Refuge, Julie has secured all her devices and passwords as well as moved into a house with CCTV cameras installed outside. These services have helped her feel safe and secure. As the pandemic rolls on, Julie and Rebecca urge others not to delay seeking help.

    “Because I can tell you,” Julie says, “it gets more dangerous when they start tracking you.”


  • Sat, 19 Dec 2020 12:00:40 +0000

    What Happens Next With the Massive SolarWinds Hack

    The cybersecurity firm FireEye revealed that it has been the victim of a massive, long-running hack of its network. Given FireEye’s stature in the tech community, that alone would have made headlines, but the company went on to explain that the hackers were able to gain access to their system through corrupted software updates dispatched by SolarWinds, a company whose network monitoring programs are used by the vast majority of the Fortune 500; top U.S. telecom companies; every branch of the U.S. military; the departments of Justice, State and Defense; the White House Executive Office; the National Security Agency; the Department of Energy and National Nuclear Security Administration; a number of state governments and private sector actors; and many more.

    Even in a year like 2020, this is massive news.

    Why It Matters:

    This is a nightmare scenario for the U.S. government: A private sector company hired by multiple U.S. agencies was used as a Trojan horse to gain access to wide swaths of some of the most sensitive data the U.S. government possesses. Cyberattacks like this are called “supply chain attacks,” where hackers hijack trusted software updates provided by legitimate companies to break into their customers’ networks. While the perpetrators have yet to be conclusively identified, the resources needed to pull off this kind of operation and keep it undetected for months—the compromised updates started going out in March and continued as recently as this past weekend—mean nation-states are the prime suspects. Given its history with these kinds of attacks and its desire for payback against the NSA and CIA for past cyber operations revealed by Edward Snowden and data dumps like Vault 7, the leading suspect is Russia. More specifically, suspicion has fallen on a group known as APT29, aka Cozy Bear, which is affiliated with Russia’s foreign intelligence service, the SVR.

    Whoever was behind it, the damage to U.S. national security (and the reputation of its key agencies that are responsible for protecting and deploying the country’s most sophisticated cyber weapons) is substantial. The hack has revealed that U.S. critical infrastructure and sensitive data remain vulnerable to threats from cyberspace. But we already knew that (see the Office of Personnel Management attacks from a few years ago); the real question is what the U.S. can do about it. And therein lies the problem.

    What Happens Next:

    For the next few months (at least), the focus will be on assessing the damage done, patching any remaining vulnerabilities, and rooting out hackers who may have used the initial breach to gain “persistent” access to sensitive networks. Rather than downloading all the critical data immediately, the attackers used their access to install additional backdoors and cover their tracks, allowing them to monitor developments over the course of the year. In other words, the hack remains “ongoing.”

    The next goal will be to determine the actual purpose of the cyberattack, which will be critical in forming the official response of the U.S. government. If it’s decided this was a more classic attempt at espionage—albeit updated for our 21st century reality—then more defensive cyber tools (like beefed-up firewalls) will be deployed in response to shore up network defenses. A Biden administration would also try to do this as part of a coordinated international effort, which makes sense as SolarWinds—a publicly traded company—counts multiple international corporations and other governments among its clients as well. The overall U.S. response in this scenario will be measured, part of the business of 21st century politics, and will focus on targeting individuals and entities responsible for the attack, but nothing sweeping against Russia (or whichever state perpetrated it).

    Why not more aggressive? Two critical reasons. The first is that the U.S. has never settled on solid responses to cyberattacks, given the confusion inherent in attributing them, and things can quickly escalate unintentionally in the cyber realm. The second, and arguably more critical, reason is that the U.S. engages in similar activities, and escalating the response also runs the risk of exposing covert U.S. operations under way.

    That doesn’t mean foreign adversaries aren’t keeping a close eye on the response. While the attack, first launched months ago, wasn’t timed to target the incoming Biden administration, its exposure on the cusp of Biden assuming office means that how the new administration responds will set the tone for the next four years of cyber competition. In addition to shoring up defenses, network defenders have already begun targeting the SolarWinds hackers’ command-and-control systems by seizing IP addresses used in the operation. At the organizational level, look for a White House cyber czar to be coming back, a position that was cut during John Bolton’s tenure at the National Security Council. That makes sense given the need for coordination across the government as the U.S. braces for more of these types of hacks, both because of the growing sophistication of hackers (and the tools they’ve stolen over the years, from the newly disclosed theft from FireEye to the earlier theft of NSA hacking tools later leaked by a group known as the Shadow Brokers) and because there are ever more digital targets as our lives and huge chunks of the global economy are increasingly ported over to cyberspace.

    But if it’s determined that the hackers were after critical infrastructure (with the potential of costing American lives) or to kneecap U.S. industries, then the response gets more serious and aggressive. We’re just unlikely to hear about it. That’s because…

    The One Major Misconception About It:

    The U.S. is not engaging in the same kinds of cyber operations against our adversaries. Don’t believe it. The U.S. has the same, if not greater, offensive capabilities as any other nation-state out there. But cyberspace isn’t like more traditional domains of conflict, where you want your adversary to know you have the bigger and better weapon to act as a deterrent; it’s wiser to keep your most advanced capabilities under wraps. Another reason you don’t hear about U.S. cyberattacks? Because many of the countries that are the targets of U.S. cyber operations—Russia, China, and North Korea—are authoritarian regimes that would never publicize their failures. In the U.S., exposing hacks like this leads to short-term political embarrassment, but also stronger cyber systems over the long run as key weaknesses are addressed. Think of it as the inherent long-term tech advantage of operating in an open political system.

    The One Thing to Say About It on a Zoom Call:

    America’s reliance on the private sector, one of its greatest strengths in a traditional economy, is also the source of one of its biggest vulnerabilities in the digital world if left unaddressed. SolarWinds just proved that; what’s left to be seen is how well the government can adapt to this new reality. Yet one more urgent thing on Biden’s plate come January 20th.

  • Fri, 18 Dec 2020 18:02:50 +0000

    U.S. Blacklists More Than 60 Chinese Firms, Including Drone Giant DJI

    The U.S. Commerce Department announced it’s blacklisting Semiconductor Manufacturing International Corp., drone maker SZ DJI Technology Co. and more than 60 other Chinese companies “to protect U.S. national security.”

    “This action stems from China’s military-civil fusion doctrine and evidence of activities between SMIC and entities of concern in the Chinese military industrial complex,” the Commerce Department said in a statement.

    Commerce Secretary Wilbur Ross confirmed the move in a Friday morning interview with Fox Business. It was reported first by Reuters overnight. Shares in SMIC, China’s top chipmaker, slid 5.2% Friday in Hong Kong on the news.

    Other affected Chinese entities include those “that enable human rights abuses, entities that supported the militarization and unlawful maritime claims in the South China Sea, entities that acquired U.S.-origin items in support of the People’s Liberation Army’s programs, and entities and persons that engaged in the theft of U.S. trade secrets,” according to the U.S. government statement.

    “There’s plenty in the open press about how DJI has been part of the surveillance state and overall suppression within China,” a senior Commerce official said.

    The majority of the newly banned companies are Chinese and will join the likes of Huawei Technologies Co. on a list that denies them access to U.S. technology from software to circuitry.

    Companies including Huawei and SMIC have been caught in the middle of worsening tensions between the world’s two largest economies, which have clashed on issues from trade to the pandemic.

    President Donald Trump had been widely expected to level more sanctions against China’s national champions before Joe Biden formally took office.

    Chinese Foreign Minister Wang Yi called the U.S.’s expansive use of sanctions against Chinese companies “unacceptable” in a video address to the Asia Society on Friday. He urged the U.S. to stop “overstretching the notion of national security” and “the arbitrary suppression of Chinese companies.”

    Shanghai-based SMIC, a supplier to Qualcomm Inc. and Broadcom Inc., lies at the heart of Beijing’s intention to build a world-class semiconductor industry and wean itself from reliance on American technology. Washington in turn views China’s ascendancy and its ambitions to dominate spheres of technology as a potential geopolitical threat. A blacklisting threatens to cripple SMIC’s longer-term ambitions by depriving it of crucial gear.

    For U.S. companies exporting items to SMIC for making 10-nanometer or more advanced chips, their applications for a license will face “presumption of denial,” while items for producing chips more mature than 10-nanometer will be reviewed on a case by case basis, according to a senior Commerce official.

    Companies exporting parts made outside of the U.S. to SMIC will face certain restrictions depending on how much of their technology is U.S.-origin, and Washington is talking to “like-minded governments” about forming a unified approach to the Chinese chipmaker, senior Commerce officials said. They declined to give details on which governments the U.S. is talking to or the potential implications for non-U.S. companies like ASML Holding NV and Tokyo Electron Ltd. that also supply equipment for making advanced chips.

    In response to the widening U.S. crackdown, China is planning to provide broad support for so-called third-generation semiconductors in its next five-year plan to increase domestic self-sufficiency in chip manufacturing, people with knowledge of the matter have said. SMIC, backed by the China Integrated Circuit Industry Investment Fund as well as Singapore’s sovereign fund GIC Pte and the Abu Dhabi Investment Authority, is expected to play a central role in that overall effort.

    SMIC representatives didn’t respond to requests for comment. The company had already been laboring under similar, less severe curbs after the Commerce Department in September placed it on a separate export restrictions list, accusing SMIC of supplying the military. Those sanctions took a toll on shares of the company, whose co-CEO Liang Mong Song this week unexpectedly resigned, triggering another selloff.

    —With assistance from Jing Li and Peter Martin.



