Showing results for tags 'tech'.

Found 13 results
  1. Go to the official TWRP recovery page via this link https://twrp.mec/ and download the TWRP recovery image for your device and model name. If your device does not appear in the device list, you will need to download the image from another third-party website instead. Now, copy the .img file you just downloaded into a folder on your computer and rename it to twrp.img.
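Once twrp.img is in place, the usual next steps (not spelled out above) are to reboot the phone into its bootloader and flash the image with the fastboot tool from Google's Android platform tools. As a sketch only - partition names vary by device, so treat these commands as illustrative, and the file names here are made up - the sequence can be expressed as command lists ready to pass to subprocess.run():

```python
import os
import shutil

def prepare_recovery_image(downloaded_path: str, work_dir: str) -> str:
    """Copy the downloaded recovery image into a working folder and
    rename it to twrp.img, as the guide above describes."""
    os.makedirs(work_dir, exist_ok=True)
    target = os.path.join(work_dir, "twrp.img")
    shutil.copyfile(downloaded_path, target)
    return target

def flash_commands(image_path: str) -> list:
    """Build the adb/fastboot command lines typically used to flash a
    custom recovery. On a real device you would run each one with
    subprocess.run(cmd, check=True)."""
    return [
        ["adb", "reboot", "bootloader"],                # drop into fastboot mode
        ["fastboot", "flash", "recovery", image_path],  # write the image
        ["fastboot", "reboot"],                         # reboot back to Android
    ]
```

Note that many newer (A/B partition) devices have no dedicated recovery partition and TWRP is booted or installed differently on them, so always follow the instructions on your device's own TWRP page.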
  2. Here are the five biggest tech disappointments of 2020

With the year drawing to a close, it's a good time to look back at the year in tech. While we had some truly amazing technology grace our lives this year - and you can have a look at some of it here - 2020 also had its fair share of disappointing tech products, services, and happenings in general. In this piece, we'll look at five of the biggest tech disappointments of 2020. Note that this list is very much dependent upon personal preferences, so you may not feel as strongly about a product as we do, and vice versa. Furthermore, the failures that we highlight may not necessarily be commercial flops, and may have their ardent customers and fans for various reasons. With that out of the way, let's begin!

1 - Microsoft Surface Duo

Following much fanfare, Microsoft finally took the wraps off its highly anticipated "Surface Phone" last year, under the "Surface Duo" moniker. The dual-screen hinge design, as well as Microsoft's penchant for building premium Surface devices, had many hopeful that this would be the time the company finally nails the mobile category with a truly revolutionary device that amplifies productivity. Coupled with the fact that it would run Android rather than some new Microsoft OS, it would enjoy a huge library of apps too - unlike Windows Phone, where the "app gap" became the bane of its existence. Well, the Surface Duo finally landed in August this year, housing a Snapdragon 855 processor, 6GB of RAM, 128/256GB of storage, a 3,577 mAh battery, and two 5.6" displays held together by a hinge. Apart from specs that were nothing to write home about, with an outdated processor and a very mediocre camera, the things that really disappointed about the Surface Duo on paper were its price tag of $1,399 and its U.S.-only availability. As our Senior Editor Rich Woods wrote back then, the Surface Duo was tragically overpriced.
With foldable phones priced around $1,500 and above, you would think that Microsoft's offering fits neatly into this niche, but an important distinction is that other devices in this price bracket offer a foldable display, rather than two displays held together by a hinge. In fact, the LG V60 variant which comes with a dual-screen accessory, a Snapdragon 865, 8GB of RAM, 128GB of storage, and a 5,000 mAh battery is priced at $899, considerably cheaper than the Surface Duo. On paper, the V60 is a much better value offering than the Duo. Of course, the next argument from ardent supporters was that the Duo offers premium build quality and an Android OS customized by Microsoft for this device. While the first argument definitely holds weight, the second fell extremely short when reviews of the Duo started pouring in, with the most scathing ones pointing out the numerous software bugs and glitches present in the device that actually result in decreased productivity in day-to-day tasks, rather than the opposite. Although all of this is expected from a first-generation device that largely appears to be experimental - and we've seen Microsoft turn things around with the Surface before - it was still disappointing to see the Duo fail to live up to the hype that it had built over the past couple of years. This is one device that many believed would allow Microsoft to establish a foothold in the saturated mobile market, but clearly, this is not the case. With the company offering three years of feature updates for the Duo as well as finally expanding availability to more markets, one would hope that the company has learned its lessons from this release and that things will improve both for this device and its potential successors. This particular device category certainly has promise, but can't live up to it right now, and definitely not at this price.
2 - Cyberpunk 2077 on consoles

Yet another heavily hyped product that failed to live up to expectations this year, at least to some extent. While development on Cyberpunk 2077 started back in 2012, it was not until the completion of The Witcher 3's DLCs in 2016 that CD Projekt Red (CDPR) went all-in on its upcoming AAA title. The title was marketed as a truly groundbreaking gaming experience, the likes of which players had never seen before. The game was delayed multiple times, but was finally released on December 10, 2020. Let's be clear from the get-go: Cyberpunk 2077 is not a bad game by any means. According to most reviews and feedback from players around the world, it delivers a gripping narrative, decent gameplay, and a beautifully designed vertical open world. The problem is that it doesn't offer a polished experience right now - not even close to one. On PC, many players are experiencing bugs and crashes, but they still seem to have the better end of the stick compared to console players. At launch, players on last-gen consoles, namely PlayStation 4 and Xbox One, were met with a myriad of issues, the most glaring one being the game struggling to hit 30fps and mostly hovering around the 20fps mark or below. It is important to note that this happens while the game runs at a resolution of below 1080p on both consoles. On PlayStation 4, it hovers at 900p and lower, and the Xbox One runs at a similar resolution too. The PS4 Pro and Xbox One X fare only slightly better. The situation is only slightly better on next-gen consoles such as the PS5 and Xbox Series X|S. While the title runs at a better resolution there, it is barely able to maintain the ideal 60fps target. Other issues such as bugs and crashes persist across console generations. Thousands of people have complained about the state of Cyberpunk 2077 on consoles, claiming that CDPR intentionally misled players by not showing off gameplay on consoles and providing no console copies to reviewers.
The company has since issued an apology, accepting that the game is not in a decent condition, especially on last-gen consoles, and that it will be releasing major patches in 2021 to address concerns. However, it has emphasized that the patches "won’t make the game on last-gen look like it’s running on a high-spec PC or next-gen console, but it will be closer to that experience than it is now". Following this public apology, Sony removed Cyberpunk 2077 from the PlayStation Store and is offering full refunds to anyone who wants one. Microsoft has offered refunds for digital Xbox copies of the game as well, but has not delisted it. Since the title's launch, CDPR's stock has dropped by almost 40%. That said, it is important to note that Cyberpunk 2077 is apparently not a failure in terms of units sold. Recently, CDPR revealed that it has managed to sell 13 million copies, even after removing refunds from the equation. One can only hope that the firm manages to pull itself together and turn things around sooner rather than later.

3 - Quibi

We wouldn't be surprised if you told us that you haven't heard of Quibi before. The name is a play on the phrase "quick bites"; the platform offered short-form video content and shows to mobile users only. It launched on April 6, 2020, with a free 90-day trial and two subscription plans: $4.99/month with ads or $7.99/month without them. Although it received 1.7 million downloads in its first week, reception ranged from mixed to negative, with some praise directed at its original shows. This made sense given that this is a market already mostly captured by TikTok, YouTube, and other services, which offer free alternatives. And if Quibi was targeting users who had subscribed to the likes of Netflix or Disney+, it didn't have the brand recognition or content to justify its price.
This was further confirmed when reports indicated that 90% of the roughly one million people who signed up for a free trial in the first three days after launch did not renew their subscription after it expired. Following its lukewarm reception, Quibi announced that it was shutting down on December 1. The company stated that it was closing its doors "because the idea itself wasn’t strong enough to justify a standalone streaming service or because of our timing". The latter is a bit difficult to believe, since we are in the middle of a pandemic in which people have more time on their hands to stream shows, especially ones whose episodes are less than 10 minutes long. This was a rather quick end for a company which raised $1.75 billion from investors such as Walmart, PepsiCo, and Anheuser-Busch. It would be unfair to say that Quibi failed for lack of trying, since it did chase original programming and was actually nominated for 10 Emmy awards, two of which it won. Many claim that it failed simply because it spent too much on advertising and focused on mobile users only rather than offering a broad platform. Farewell Quibi, we hardly knew ye.

4 - Apple's stance on cloud game-streaming

Microsoft finally brought its Project xCloud cloud game streaming service to Xbox Game Pass Ultimate this year, allowing people to play console games using just their Android smartphones and tablets. The key word here is "Android" - when it comes to mobile platforms, the service is limited to Android only. Microsoft fully intended to bring this service to iOS devices as well, but faced stiff resistance from Apple, which said it would not allow the service because it wouldn't be able to review the games being offered and would have no control over what content is made available to users. This is a very weird argument because it doesn't hold up against content streaming services like Netflix or Disney+, which also offer content in the same way without moderation from Apple.
Microsoft isn't the only company facing problems on this front. Google Stadia and Nvidia's GeForce Now are unavailable on iOS for the same reason, although the former worked around the issue by releasing a dedicated Progressive Web App (PWA). Following criticism of its stance, Apple did backpedal a bit in September by offering an olive branch. Kind of. It modified its rules to allow game-streaming with some caveats; game-streaming providers can still offer catalog apps, but they'll have to link to the individual games in the App Store. Moreover, they'll have to allow users to sign up for the service in the app, meaning that Apple gets a 30% cut, and they'll have to support Sign in with Apple. Microsoft promptly responded to the move, stating that it makes for a poor customer experience. The company went on to say:

Gamers want to jump directly into a game from their curated catalog within one app just like they do with movies or songs, and not be forced to download over 100 apps to play individual games from the cloud. We’re committed to putting gamers at the center of everything we do, and providing a great experience is core to that mission.

Unfortunately, Apple has refused to budge further, and Microsoft isn't willing to comply with Apple's restrictive rules either. As such, it has been rumored that the Redmond tech giant is working on an alternate solution to circumvent Apple's policies. It is reportedly planning to introduce a browser-based web app - à la Google Stadia - that will bring xCloud game streaming directly to iOS next year. This was allegedly confirmed by Microsoft's gaming head Phil Spencer at an internal meeting with employees, in which he stated: "We absolutely will end up on iOS".

5 - Zoom's privacy and security fiasco

Online communication platform Zoom had an excellent year in terms of stock price and usage in 2020.
This was mostly thanks to the ongoing pandemic, in which people were forced to work from home, observe social distancing, and adjust to the "new normal", where most formal work communication has to be done remotely. However, the company also had a string of issues related to user privacy and security, which were all the more serious because of the number of people using the platform this year. While the firm dealt with each problem as it emerged, it was in the headlines frequently, and usually not for good reasons. In March, it was revealed that Zoom sent analytics data to Facebook even if the user didn't have a Facebook account, a security gap which the company plugged by removing the problematic SDK. Then in April, a major vulnerability was disclosed that allowed hackers to steal a user's Windows credentials. SpaceX immediately prohibited its employees from using Zoom in official capacities. Following this fiasco, Zoom vowed to fix privacy and security flaws during a 90-day feature freeze. During this period, it updated its macOS pre-installation scripts after it was revealed that they contained abusive behavior that circumvented Apple's OS restrictions and installed the software on Macs without proper user consent. It also enabled waiting rooms by default to prevent "zoombombing". While it was fixing these issues, Zoom developed a negative press image as software that is insecure and violates user privacy. Two U.S. state attorneys started looking into Zoom's privacy practices, while a shareholder sued the company. Google banned its use on employee laptops, Singapore prohibited teachers from using it, Standard Chartered told employees to stay away from it, and the Indian government called it an "unsafe" platform for video conferencing. The company also made a misleading claim that it had "more than 300 million daily users", a phrase that it backtracked on and changed to "surpassing 300 million daily Zoom meeting participants".
While things have been better for the video conferencing app on the privacy and security front in the second half of 2020, with the company finally rolling out support for end-to-end encryption, it was certainly disappointing to see a mainstream service, used daily by millions of people around the globe amidst the pandemic, be so lax when it comes to ensuring the safety of its online user base. Here's hoping for a stronger showing next year, with COVID-19 vaccines on the horizon and competitors like Microsoft Teams and Google Meet upping their game as well.

Those were our picks for the top five tech disappointments of 2020, but we're certain there were other things that ticked you off as well. Managed to grab a next-gen console yet, or did you have to submit to the will of malicious scalpers? We have highlighted what we personally consider the biggest tech disappointments this year, but feel free to let us know your thoughts in the comments section!
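As a footnote to the Quibi entry above, a quick back-of-the-envelope calculation shows why the trial-to-paid numbers looked so alarming next to the $1.75 billion raised. The inputs are the figures quoted in the piece; everything derived from them is a rough estimate, not reported data:

```python
# Figures quoted in the article above; derived values are estimates only.
trial_signups = 1_000_000            # roughly one million sign-ups in the first three days
churn_rate = 0.90                    # 90% did not renew after the free trial
price_low, price_high = 4.99, 7.99   # $/month, with and without ads
raised = 1_750_000_000               # total raised from investors

retained = round(trial_signups * (1 - churn_rate))
monthly_revenue_low = retained * price_low
monthly_revenue_high = retained * price_high

# Months of best-case subscription revenue needed just to match the raise
months_to_cover = raised / monthly_revenue_high

print(retained)                      # 100000 retained subscribers
print(round(monthly_revenue_high))   # 799000 dollars/month at best
print(round(months_to_cover))        # 2190 months, i.e. roughly 180 years
```

Even in the best case (every retained user on the pricier ad-free plan), subscription revenue from that cohort would take centuries to cover what investors put in, which makes the quick shutdown less surprising.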
  3. Here are the five coolest tech innovations of 2020

As we inch ever closer to the end of the year, it's always fun to look back at the cool things that happened in the past 12 months. Sure, this year may have had more than its fair share of negative moments, but in the technology world, things kept moving and we saw a lot of cool products make it to market. In this article, we're taking a look at the five coolest innovations in technology that we saw this year. Before we start, let us clarify that the coolest innovations aren't necessarily the best products, but rather the products that shake up the industry, change paradigms, or are just straight-up fun. It's also worth remembering that preferences are personal, so what we find the coolest may not necessarily be what you find the coolest. We'll have some honorable mentions to help us cover more ground, but if there's something we overlooked, make sure to let us know in the comments.

1 - Lenovo ThinkPad X1 Fold

Foldable phones were propelled into the spotlight in 2019, but Lenovo was one of the few companies, if not the only one, brave enough to show us a foldable PC. In 2020, we finally saw that idea materialize with the ThinkPad X1 Fold, and the concept is incredibly cool, with a solid execution to boot. The ThinkPad X1 Fold has a 13.3-inch display, just like a lot of regular laptops you could find anywhere else, but that screen can fold, so you can use it as an ultraportable 9.6-inch laptop, with one half of the display being the actual screen and the bottom half serving as a keyboard. In fact, Lenovo also sells a keyboard that snaps onto the lower half to create a true laptop experience, and the interface adapts when you do that.
Lenovo thought of pretty much everything - the keyboard connects over Bluetooth, so you can keep using it when the device is opened up into its full 13.3-inch display; the display has a kickstand for when you have it unfolded; and when it's closed, the whole device has a cool leather case that makes it look like a classy notebook. It really shows the potential of foldable PCs, and it's an amazing starting point for the technology. Where it falls flat is its pricing and the fact that, at the end of the day, it's running Windows 10, and the OS just can't keep up with the form factor. It's a first-generation product with first-generation problems, and while you probably shouldn't buy it, Lenovo has us excited about what could come next. There are some honorable mentions for foldables, though, because the smartphone world also kept evolving this year. Samsung introduced the Galaxy Z Flip (and Z Flip 5G), plus the Galaxy Z Fold2, both featuring the new Ultra-Thin Glass (UTG) that can actually fold while being more rigid than simple plastic. The Z Fold2 also saw major design improvements, with a big outer display and better durability. Of course, the price of these products is yet to come down, so there's still work to be done before folding phones become mainstream.

2 - LG Wing

I suspect this will be a controversial pick, but the LG Wing is a really cool idea in my opinion. For years, all we've seen in smartphones are flat slabs, and even with foldables coming into the picture, we're just kind of missing some of the fun of yesteryear in the phone world. The LG Wing is really when it set in for me that companies are trying to do cool things again. Sporting two displays stacked on top of each other, the LG Wing features a funky design that forms a T shape if you want to use both screens. At first, it looks like a regular smartphone, but give it a flick and the top display swivels horizontal, unveiling a smaller display underneath.
These two screens can be used for multitasking, like texting on the bottom screen while watching a full-screen video on the horizontal display. LG also went out of its way to create a slightly different home screen for that horizontal display so it feels that much more intentional and natural. Aside from the screens themselves, LG also baked in some cool features, like a legitimate gimbal camera that makes use of the T-shaped design to control the camera angle and frame shots on the fly. I'm sure someone might argue in defense of the Surface Duo, which is also a dual-screen phone, but that phone feels, to me, like it's capitalizing a lot more on the existing foldable smartphone concept, and it does a lot less that's unique to it. We've also seen other dual-screen phones with some similarities, like the ZTE Axon M a few years back. The Surface Duo has definitely evolved the concept a good amount, but it doesn't stand out as being as cool as the LG Wing, and it also didn't pack the specs or polish to justify its exorbitant price tag (even more so than the LG Wing). Still, it's worthy of an honorable mention. I'd also like to give a very quick shoutout to the ZTE Axon 20 5G for being the first phone with an under-display selfie camera. I almost gave it its own spot on this list, but the camera tech is really the only thing that's unique about this phone, and it's not as cool to see in action as the phones mentioned above.

3 - Xbox Game Pass Ultimate cloud streaming

Cloud gaming isn't really new this year, with services like PlayStation Now and GeForce Now having been available for a while, plus Google's Stadia launching last year. But Microsoft came out with a service that's available at no extra cost if you have Xbox Game Pass Ultimate, giving you more ways to play an already huge library of games.
That means that not only do you get a wide array of PC and Xbox games by subscribing to Game Pass Ultimate, you now get the ability to play the majority of those games on your phone, wherever you go, at no extra cost. Xbox Game Pass was already an incredible value, including the vast majority of games from Microsoft's studios, plus a ton of games from other publishers like Destiny 2, Tetris Effect, Doom Eternal (though that's now owned by Microsoft), and more recently, the EA Play library on Xbox. Adding cloud streaming on top of that not only cements Xbox Game Pass as the best value in gaming, but also makes it the best cloud streaming service you can subscribe to. The biggest drawback of Game Pass Ultimate cloud streaming is that it's still only available on Android right now, while competitors like Stadia or GeForce Now support a few more platforms. Still, Microsoft's service is definitely the coolest.

4 - PlayStation 5

Of course, it's impossible to talk about this year without mentioning the next generation of consoles, starring Sony's PlayStation 5 and Microsoft's Xbox Series X|S. While each console has slightly different strengths and weaknesses, both brought very similar innovations, including being some of the first pieces of hardware to pack RDNA2-based GPUs and, perhaps most notably, blazingly fast loading times courtesy of SSD storage and DirectStorage technology that even PCs weren't able to leverage before. The thing is, most of that is predictable innovation, simply finding better ways to do the same things. The PlayStation 5 gets the edge for having the cool factor, and that's mostly thanks to its controller, the DualSense. While we'll have to see how these technologies are implemented in future games, new features like the improved rumble and the adjustable tension on the shoulder triggers bring new ways to immerse players in gaming experiences, and that's definitely a commendable effort.
Sony also invested a lot in 3D audio, with its Tempest Engine promising to support "hundreds of sound sources", meaning you can hear sounds as if they're coming from many different places. Sony even acknowledged that everyone's head is different, and it suggested that it may let users scan their own head to help generate custom 3D audio profiles in the future. On top of that, Sony actually brought new experiences that sold the PlayStation 5, such as Spider-Man: Miles Morales (though this one also works on PlayStation 4) and Demon's Souls, while Microsoft didn't have anything of its own to make its new consoles really shine after Halo Infinite was delayed. On the other hand, Microsoft didn't have to do much to the Xbox Wireless Controller because it was already immensely comfortable (comfort being a long-standing weak point of Sony's DualShock series), and it makes up for the lack of new titles with unrivaled backward compatibility. Nearly every game and accessory that worked on Xbox One will work on the Xbox Series X|S consoles, meaning your entire backlog will work on the new consoles - and better than ever, at that. Still, access to new experiences is a bit more exciting than keeping old ones for most people, and nothing here is radically different from what we've seen before. The PlayStation 5 definitely has more unique features, which makes it the pick for this spot.

5 - Apple Silicon

For years, Apple was rumored to be making the switch from Intel processors to its in-house ARM-based processors, and in June, the Cupertino giant finally brought those rumors to life with the announcement of Apple Silicon. The company's first chipset, the Apple M1, was then announced with three new products to go with it - the new MacBook Air, MacBook Pro (13-inch), and Mac Mini, all promising huge performance gains over the previous models, and some of them with just passive cooling.
The company really didn't fail to deliver, and most benchmarks we saw after the products launched reaffirmed that Apple Silicon could truly supplant Intel in terms of performance, at least in these compact devices. Apple wasn't the first to bring ARM-based computers to market, but it might as well have been. While Windows on ARM slogged through the first years of its life and - even after getting a purpose-built chipset in the form of the Snapdragon 8cx - failed to gain much traction among users and developers, Apple Silicon has already prompted numerous major apps to add support for the new architecture. Adobe's Creative Cloud suite started adding support for the new chips within weeks; meanwhile, Adobe announced support for Windows on ARM PCs over a year ago and still hasn't fully delivered on it. On top of that, Apple's Rosetta 2 translation layer doesn't have as many performance drawbacks as x86 emulation on Windows on ARM, so even without official support, most apps run fairly well on ARM-based Macs. While the future is still uncertain for Apple Silicon Macs that require more power or upgradability, the Apple M1 was a stellar start, and it let everyone know that the company can in fact move away from Intel. I'm certainly excited to see what other Apple Silicon Macs will look like, and whether Qualcomm/Microsoft can bring forth something that competes toe-to-toe with Apple.

Those are the products we considered most innovative or groundbreaking this year, but there's a lot more that could have people excited in 2020. In the PC world, Nvidia, AMD, and Intel all brought exciting new products to the table that promise even more performance, and over on the phone side of things, Apple's iPhone 12 was the biggest design change the company has made in a few years. We highlighted our favorites, but let us know yours in the comments!
  4. If artificial intelligence is going to destroy humanity, it probably won’t be through killer robots - it will be through a thousand paper cuts. In the shadow of the immense benefits of advances in technology, the dark effects of AI algorithms are slowly creeping into different aspects of our lives, sowing division, unintentionally marginalizing groups of people, stealing our attention, and widening the gap between the wealthy and the poor. While we’re already seeing and discussing many of the negative aspects of AI, not enough is being done to address them. And the reason is that we’re looking in the wrong place, as futurist Amy Webb discusses in her book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Many are quick to blame large tech companies for the problems caused by artificial intelligence. They’re not wrong. A few very wealthy organizations wield enormous power over how AI systems are developed and deployed across thousands of applications and delivered to billions of devices and users. And by extension, they are responsible for many of the problems we are facing, from algorithmic bias and social media filter bubbles to privacy issues and lack of diversity. These companies, however, are not inherently evil and are not alone to blame for the broken state of AI, Webb argues in The Big Nine. The problems run much deeper, in the underlying systems that push these companies to work as they do. And if we don’t fix the problems at the root, the consequences can be disastrous. In The Big Nine, Webb provides a comprehensive layout of the current problems of the AI industry, an outlook on what can happen in the future, and a roadmap for setting the industry on the right path.

G-MAFIA vs BAT: The overlords of artificial intelligence

The Big Nine is a reference to the nine big tech companies who control the lion's share of what is happening in artificial intelligence.
Six of them are in the United States: Google, Facebook, Microsoft, Apple, Amazon, and IBM. Webb collectively calls them the G-MAFIA and describes them as “a closed supernetwork of people with similar interests and backgrounds working within one field who have a controlling influence over our futures.” The three remaining companies are Chinese tech giants Baidu, Alibaba, and Tencent, already well known collectively as BAT. “I firmly believe that the leaders of these nine companies are driven by a profound sense of altruism and a desire to serve the greater good: they clearly see the potential of AI to improve health care and longevity, to solve our impending climate issues, and to lift millions of people out of poverty,” Webb writes. But the problem is that the Big Nine are being pushed by external forces - often inconspicuously - that pressure them to work in ways that are against their best intentions.

The cultural problems of AI companies

“The future of AI is being built by a relatively few like-minded people within small, insulated groups,” Webb writes. “[As] with all insulated groups that work closely together, their unconscious biases and myopia tend to become new systems of belief and accepted behaviors over time.” And this like-mindedness starts in the universities where big tech companies recruit their talent, and where the pioneers of AI hailed from. In U.S. universities, computer science programs are mostly focused on hard engineering skills: programming, systems engineering, math. When it comes to AI, students focus on machine learning algorithms, natural language processing, computer vision, and other technical skills. There’s little room for anthropology, philosophy, and ethics; those topics are often overlooked or included only as electives. Thirty years ago, when algorithms were not yet so dominant in our lives, this would not have been much of a problem.
But today, AI is slowly but surely finding its way into sensitive areas such as processing loan applications and making hiring decisions. And in these situations, the algorithms reflect the unconscious biases, preferences, and blind spots of the people who create them. The people who develop AI at big tech companies usually come from similar social backgrounds, demographics, ethnicities, and religions. Consequently, their products often disadvantage or leave out certain groups of people. This is why we regularly see AI scandals such as an Amazon hiring algorithm that discriminated against women, an IBM face detection system that performed poorly on non-white female faces, a Google algorithm that mislabeled images of black people, and an Apple algorithm that disadvantaged women in credit scoring. Fortunately, all these incidents were followed by quick apologies and fixes from the respective companies. Unfortunately, most of them were discovered only when someone stumbled on them by chance. What we don’t know is the many other hidden ways AI algorithms are discriminating against people without their knowledge. These are the paper cuts, causing small disadvantages that might go unnoticed by the individual but can have massive effects at scale. And when the people creating the AI systems are blind to their own biases, they surely won’t know where to look for problems.

Why don’t universities fix their programs?

Because technology is moving faster than academia. “A single, required ethics course—specifically built for and tailored to students studying AI—won’t do the trick if the material isn’t current and especially if what’s being taught doesn’t reverberate throughout other areas of the curriculum,” Webb writes. And universities can’t press pause to rethink and restructure their courses. “Universities want to show a strong record of employed graduates, and employers want graduates with hard skills.
The Big Nine are partners with these universities, which rely on their funding and resources,” Webb writes in The Big Nine. But why don’t tech companies change their norms and criteria?

The profit-driven AI market

Throughout its history, AI research has gone through a series of summers and winters: periods of hype and excitement (and a lot of money thrown at AI research) followed by disillusionment and the drying up of funding when the technologies failed to deliver on their promises. The success of deep neural networks in recent years has rejuvenated interest in the field of AI. But research on neural networks is extremely expensive and requires vast amounts of data and compute resources. The mounting costs of deep learning research have pushed AI scientists and research labs into the arms of large tech companies. The deep pockets of tech companies allow scientists to continue their research. But these companies are also driven by market forces and expect a return on their investment. “There is tremendous pressure for the G-MAFIA to build practical and commercial applications for AI as quickly as possible. In the digital space, investors have grown accustomed to quick wins and windfalls,” Webb writes in The Big Nine. The direct result of this drive is the premature and hasty release of “AI-powered” products to the market, which means developers don’t have time to weigh the negative ramifications. But the less noticed consequence is the commercialization of AI research. Scientific research labs are required to direct at least part of their resources to creating profitable products so that they can keep their investors happy and secure the next round of funding. We’ve already seen this happen with the UK-based DeepMind, acquired by Google in 2014, and the San Francisco–based OpenAI, which receives its funding from Microsoft. DeepMind created an “applied” division that works on commercial AI products.
OpenAI has pledged to license its “pre-AGI” technologies to its investors, which for the moment include only Microsoft. Why aren’t tech companies and their use of AI regulated? “In the United States, the G-MAFIA wield significant power and influence over government in part because of America’s market economy system and because we have a strong cultural aversion toward strong government control of business,” Webb writes. But the situation is growing increasingly dangerous as AI and the technology created by the G-MAFIA continue to permeate every aspect of our lives. Per Webb: “Sometime in the next decade, the rest of the AI ecosystem will converge around just a few G-MAFIA systems. All the startups and players on the periphery—not to mention you and me—will have to accept a new order and pledge our allegiance to just a few commercial providers who now act as the operating systems for everyday life. Once your data, gadgets, appliances, cars, and services are entangled, you’ll be locked in. As you buy more stuff—like mobile phones, connected refrigerators, or smart earbuds—you’ll find that the G-MAFIA has become an operating system for your everyday life. Humanity is being made an offer that we just can’t refuse.”

The AI-powered surveillance machine

In China, where the state is using every tool at its disposal—including AI—to consolidate its power, the situation is very different but no less dangerous. The Chinese government well understands the implications and potential of advanced AI, and it has already laid out a roadmap to achieve AI dominance by 2030. In contrast to the U.S., in China the government exerts full control over AI companies. BAT are legally obliged to put all of their data at the disposal of the authorities and enable the state to conduct mass surveillance and control citizens through their technologies.
One of the best-known instances of the government’s initiatives is the infamous Sesame social credit system, which employs AI algorithms and the platforms of BAT to keep a close watch on the behavior of Chinese citizens. The system is supposed to incentivize good behavior, such as abiding by the rules and keeping a good banking record, while punishing bad behavior such as playing video games late into the night and jaywalking. But it is also a tool to keep an eye on political dissidents and marginalize those who are not aligned with the views of the ruling party.

What’s in it for BAT?

“State-level surveillance is enabled by the BAT, who are in turn emboldened through China’s various institutional and industrial policies,” Webb writes. This is why you see the flourishing of the three companies, which together hold a vast share of China’s economy. Webb also spells out another warning that is often ignored: the AI brain drain caused by Chinese initiatives. “China is actively draining professors and researchers away from AI’s hubs in Canada and the United States, offering them attractive repatriation packages,” she writes. “There’s already a shortage of trained data scientists and machine-learning specialists. Siphoning off people will soon create a talent vacuum in the West. By far, this is China’s smartest long-term play—because it deprives the West of its ability to compete in the future.”

What happens if we don’t fix AI?

“AI’s consumerism model in the United States isn’t inherently evil. Neither is China’s government-centralized model. AI itself isn’t necessarily harmful to society,” Webb writes. “However, the G-MAFIA are profit-driven, publicly traded companies that must answer to Wall Street, regardless of the altruistic intentions of their leaders and employees.
In China, the BAT are beholden to the Chinese government, which has already decided what’s best for the Chinese.” And what’s best for Wall Street and the Chinese government is not necessarily in the best interests of humanity. As we’ve discussed, we’re already bleeding from many paper cuts, and the situation will gradually grow worse if AI research and development is not steered in the right direction. “It’s difficult to wrap our heads around potential crises and opportunities before they’ve already happened, and that’s why we tend to stick to our existing narratives. That’s why we reference killer robots rather than paper cuts. Why we fetishize the future of AI rather than fearing the many algorithms that learn from our data,” Webb warns. In The Big Nine, Webb lays out three potential roadmaps for the future of AI, two of which are disastrous. In the “pragmatic scenario,” AI stakeholders will acknowledge problems but will only make minor changes. In the U.S., the government and the G-MAFIA will not come together to make sure AI benefits everyone. The paper cuts will increase. Adversarial attacks, reward hacking, incomplete AI systems, and algorithmic discrimination will continue to harm users across the world. Worried or not, the companies creating AI systems won’t do much because they are under constant pressure to get products to the market. People will lose ownership of their data, their privacy, their identities. The social and economic divide will continue to grow. Technological and economic power will continue to consolidate in very few companies, which will continue to compete for user attention and monetization potential and will bombard us with ads everywhere. “Rather than bringing us together, AI has effectively and efficiently split us all apart,” Webb warns. Meanwhile, in China, the government will continue to exert centralized control and use AI to consolidate its power.
It will use its leverage to apply AI to its security and military apparatus and move toward developing human-level AI. It will eventually launch subtle AI-powered attacks and take the digital infrastructure of the U.S. hostage. “Humanity is on the brink of a terrifying ASI [artificial super intelligence] that has been developed by a country that does not share our democratic values and ideals,” Webb warns. In the “catastrophic scenario,” the G-MAFIA will continue their unabated competition and will eventually establish their own version of China’s social score on citizens in different countries. People will lack the power to decide even the smallest things in their lives. The G-MAFIA will cause a divide among the people as everyone becomes locked into one of a few incompatible platforms that extend into all aspects of their lives. AI will influence the social fabric. “America and its allies, who once celebrated in the G-MAFIA’s successes, are living under a system of AI totalitarianism,” Webb writes. “Citizens throughout China and all the countries supported by China’s direct investment and infrastructure find that they, too, are living under a pervasive apparatus of AI-powered punishment and reward.” China expands its AI dominion by exporting its technology and surveillance capabilities to other countries. Those countries inevitably become satellite states of the Chinese Communist Party and part of its AI-powered surveillance regime. The adversity between China and U.S. allies comes to a head when one of the parties develops superintelligent AI and annihilates the other.

GAIA: The plan to set AI on the right course

Not all is gloomy. In her book, Webb provides a series of steps that can set AI on the right course and make sure it will benefit all of humanity.
Key among them is the formation of the Global Alliance on Intelligence Augmentation, or GAIA, an international body that includes AI researchers, sociologists, economists, game theorists, futurists, and political scientists from all member countries. GAIA will also represent all socioeconomic, gender, racial, religious, political, and sexual diversities. “[GAIA members] agree to facilitate and cooperate on shared AI initiatives and policies, and over time they exert enough influence and control that an apocalypse—either because of AGI, ASI, or China’s use of AI to oppress citizens—is prevented,” Webb writes. Member nations of GAIA will collaborate to develop AI frameworks, standards, and best practices. Webb describes it as “a new social contract between citizens and the Big Nine” that is “based on trust and collaboration.” Such a body can bring about the “optimistic scenario,” in which AI is a force for good. Citizens benefit from transparency, standardized protocols, choice of technology, and ownership of their data. AI complements human cognition, provides predictive care to everyone, fights climate change, finds and filters out misinformation on social media, and more. Under the guidance of GAIA, AI brings all people together. All states, including China, will be invited to join the alliance. If they don’t, their ambitions to extend their surveillance states will be held in check by a powerful global coalition that uses its technological and economic advantage for the good of all humanity. No government will be able to prey on poor countries to expand its own AI dominion. GAIA will provide a fairer alternative in which no state is forced to trade the wellbeing of its citizens for survival. That sounds easier said than done, but as Webb explains in her book, it is a path that is built one step, one brick, one pebble at a time. The Big Nine will play a crucial part in the future of AI, but we can’t let them do it alone.
“Safe, beneficial technology isn’t the result of hope and happenstance. It is the product of courageous leadership and of dedicated, ongoing collaborations,” Webb writes. “The Big Nine are under intense pressure—from Wall Street in the United States and Beijing in China—to fulfill shortsighted expectations, even at great cost to our futures. We must empower and embolden the Big Nine to shift the trajectory of artificial intelligence, because without a groundswell of support from us, they cannot and will not do it on their own.” Source
5. Software engineers, CEOs and venture capitalists have chosen to jump from the Bay Area to places such as Denver, Miami and Austin, Texas. Some San Francisco residents are fine with it.

People behind an "Eviction Free San Francisco" banner block an Apple bus as a Google bus passes on Valencia Street

SAN FRANCISCO — When Chirag Bhakta saw a headline recently that said tech workers were fleeing San Francisco, he had a quick reaction: "Good riddance." Bhakta, a San Francisco native and tenant organizer for affordable housing nonprofit Mission Housing, is well-versed in the seismic impact that the growth of the tech industry has had on the city. As software companies expanded over the past decade, they drew thousands of well-off newcomers who bid up rents and remade the city's economy and culture. He said the sudden departure of many tech workers and executives — often to less expensive, rural areas where they can telecommute during the coronavirus pandemic — reveals that their relationship with San Francisco was "transactional" all along. "They used their capital to radically shift the makeup of poor, working-class communities," Bhakta said. "We're left with 'for sale' signs and price points that are still out of reach for most people." Many urban centers have seen residents move out in large numbers since the start of stay-at-home orders in March, but the shift has been especially dramatic for San Francisco, a city that was already experiencing rapid change because of the tech industry. Software engineers, CEOs and venture capitalists have chosen to jump from the Bay Area to places such as Denver, Miami and Austin, Texas, citing housing costs, California's relatively high income tax and the Bay Area's general resistance to rapid growth and change. The scale of the departures is visible in vacant high-end apartments, mothballed offices and quieter streets in neighborhoods popular with tech workers.
And while no one is exactly celebrating, especially as Covid-19 has devastated the incomes of many people, some residents were ready to take a break from the rich. "The gentrification pressure has been at least momentarily relieved," said John Elberling, executive director of Todco, an affordable housing nonprofit that operates in the South of Market neighborhood alongside the shuttered headquarters of countless tech companies and startups. Over the years, San Francisco residents tried a variety of tactics to protest the tech industry's effects on the city: blocking corporate buses, halting expensive new condo buildings, proposing tax increases and even threatening to limit office cafeterias. Affordable housing advocates, local politicians and longtime San Francisco residents hoped the well-off newcomers would contribute more to their new community, or if they didn't, then perhaps leave. Elberling, who before the pandemic spearheaded new restrictions on skyscrapers in San Francisco, is among those who believe the city was being overrun by people who arrived for one reason. "The motivation got to this get-rich-quick attitude," he said. "And that isn't what our city is about. You can make a lot of money here, obviously, but that's not the persona of San Francisco." Such moves have been described as an "exodus" of tech workers "fleeing" San Francisco — words that Bhakta said amount to a parting insult, as if tech workers had been slaves in Egypt. "'Exodus' is a weighted term for people escaping oppression. From what I can tell, tech workers weren't being oppressed in San Francisco," he said. People who resisted the influx of tech during the most recent mega-expansion for the industry said they do not expect a revival of San Francisco's legendary bohemian past, affordable to nearly anyone. Rents may have fallen 20 percent or more from a year ago, but they're still high by national standards, and many artists left the city a long time ago.

Although some companies such as Pinterest have canceled leases, Google is expanding its offices in San Francisco, a sign of the tech industry's attachment to the city despite the local hostility and the predictions of a permanent work-from-home culture. Eventually, the trend of moving out might reverse. "When the trend got out there that you could save a lot of money for a year while you could live elsewhere, people started to pick up on the trend and a lot of them did that," said Janan New, executive director of the San Francisco Apartment Association, a trade group for landlords. But the concerns about tech-fueled gentrification have also been compounded by deeper worries about the impact that tech products are having on elections and democracy. Just because some tech workers have left the area doesn't mean the industry will face less criticism here, even if the protests are no longer over bus fleets. On Thursday, a committee of the San Francisco Board of Supervisors is scheduled to vote on a resolution that would "condemn" the naming of a city hospital after Facebook CEO Mark Zuckerberg and his wife, Priscilla Chan, who gave $75 million to the hospital in 2015. And recently, protesters demonstrated outside the couple's San Francisco house over Facebook's role in politics, including its decision not to fact-check political ads. "No matter where the big tech companies are located, people have really seen that leaving them to regulate themselves has left our society exposed to a rampant amount of disinformation and hatred and conspiracy theories," said Andrea Buffa, an organizer of several Facebook protests. Tracy Rosenberg, executive director of Media Alliance, a San Francisco nonprofit that is often critical of the power of tech companies, said she wonders whether tech workers will want to return to a place where they've received a mixed welcome. "The level of tech blowback in San Francisco and the Bay Area was going up in intensity," she said.

"I think there'll be sort of a reluctance to come back and face that, because that was reaching a level that was hard to live with — when you are the cause of all social problems, in the eyes of a significant part of the population, at least." San Francisco has a history of boom-and-bust cycles, stretching back to the 1849 gold rush and including multiple tech bubbles. But that's not why people stay, Elberling said. "If all you care about is money, I suggest you go to Texas," he said. Some have taken that advice. Joe Lonsdale, a venture capitalist at the firm 8VC, wrote in an op-ed in The Wall Street Journal that California had "fallen into disrepair," with problems ranging from expensive housing and rising property crime to scheduled blackouts needed to prevent wildfires. "The electricity turns on and off, as in Third World countries," he wrote from his new base in Austin. Source
6. Naspers announced a $100 million Naspers Foundry fund to support South African tech startups. This is part of a $300 million (1.4 billion rand) commitment by the South African media and investment company to support South Africa’s tech sector overall. Naspers Foundry will launch in 2019. The initiatives lend more weight to Naspers’ venture activities in Africa, as the company has received greater attention for investments off the continent (namely in Europe, India and China). “Naspers Foundry will help talented and ambitious South African technology entrepreneurs to develop and grow their businesses,” said a company release. “Technology innovation is transforming the world,” said Naspers chief executive Bob van Dijk. “The Naspers Foundry aims to both encourage and back South African entrepreneurs to create businesses which ensure South Africa benefits from this technology innovation.” After the $100 million earmarked for the Foundry, Naspers will invest approximately $200 million over the next three years in “the development of its existing technology businesses, including OLX, Takealot, and Mr D Food…” according to a release. In context, the scale of this announcement is fairly massive for Africa. According to Crunchbase data recently summarized in this TechCrunch feature, the $100 million Naspers Foundry commitment dwarfs any known African corporate venture activity by roughly 95x when compared to Safaricom’s Spark Venture Fund, Interswitch’s E-Growth Fund, and Standard Bank’s several-million-dollar commitment to Founder Factory. Naspers is one of the largest companies in the world—85th by its $108 billion market cap, just after Nike—and one of the world’s largest tech investors. Aside from operating notable internet, video, and entertainment platforms, the company has made significant investments in Europe, India, Asia, and South America.
In 2018 Naspers invested $775 million in Germany’s Delivery Hero, $124 million in Brazilian e-commerce company Movile, and added $100 million to its funding of Indian food delivery site Swiggy. Naspers was also an early investor in Chinese tech group Tencent, selling $10 billion in shares this year after a $32 million investment in 2001. The South African media group has invested less (and been less successful) in Africa, though that comparison comes largely by contrast to Naspers’ robust global activities. One of Naspers’ early Africa investments, Nigerian e-commerce startup Konga, was sold in a distressed acquisition earlier this year. The company recently added around $70 million to its commitment to South African e-commerce site Takealot. And in perhaps a preview of the company shifting some focus back to Africa, Naspers made one of the largest acquisitions in Africa this September, buying South Africa’s Webuycars for $94 million. The $300 million commitment to South Africa’s tech ecosystem signals a strong commitment by Naspers to its home market. Naspers wasn’t ready to comment on if or when it could extend this commitment outside of South Africa (TechCrunch did inquire). If Naspers does increase its startup and ecosystem funding across wider Africa—given its size compared to others—that would be a major development for the continent’s tech sector. Source
7. The year 2018 saw several tech companies working hard and delivering some of the best biometric authentication tech on devices. Until now we have seen iris scanners, face scanners and optical fingerprint scanners securing content on devices. However, there’s a new biometric tech that is hardly used anywhere – vein authentication. And this too has been hacked. For those unaware, ‘vein authentication’ tech recognises the placement of a user’s veins under the skin, along with their shape and size. But hackers have found a way to bypass this tech as well, by simply creating a fake hand out of wax. This was showcased at the annual Chaos Communication Congress hacking conference in Leipzig, Germany by security researchers. “It makes you feel uneasy that the process is praised as a high-security system and then you modify a camera, take some cheap materials and hack it,” said Jan Krissler, one of the researchers who were able to ‘hack’ the new authentication system, to the Motherboard website. Although it might sound difficult for a hacker to scan the size, shape and position of the veins under the skin, the researchers used a converted SLR camera with the infrared filter removed to simply capture the pattern of the veins. “It’s enough to take photos from a distance of five meters, and it might work to go to a press conference and take photos of them,” added Krissler. He, along with another researcher on vein authentication, Julian Albrecht, reportedly took over 2,500 images over 30 days to get one perfect image that worked with the new biometric process. The image was then used to make a wax model of the hand to bypass the tech. “When we first spoofed the system, I was quite surprised that it was so easy,” Krissler mentioned. Source
8. Including some members of Fossil’s R&D group

Google and watchmaker Fossil Group today announced an agreement for the search giant to acquire some of Fossil’s smartwatch technology and members of the research and development division responsible for creating it. The deal is worth roughly $40 million, and under the current terms Fossil will transfer a “portion” of its R&D team, the portion directly responsible for the intellectual property being sold, over to Google. As a result, Google will now have a dedicated team with hardware experience working internally on its WearOS software platform and potentially on new smartwatch designs as well. “Wearables, built for wellness, simplicity, personalization and helpfulness, have the opportunity to improve lives by bringing users the information and insights they need quickly, at a glance,” Stacey Burr, vice president of product management for Google’s WearOS platform, said in a statement. “The addition of Fossil Group’s technology and team to Google demonstrates our commitment to the wearables industry by enabling a diverse portfolio of smartwatches and supporting the ever-evolving needs of the vitality-seeking, on-the-go consumer.” According to Wareable, the technology is a “new product innovation that’s not yet hit the market,” Greg McKelvey, Fossil’s executive vice president and chief strategy and digital officer, told the publication. It’s unclear what exactly that innovation is, or why exactly Google is so eager to buy it, although $40 million is a drop in the bucket for Google when it comes to acquisition costs. What we do know is that it’s somehow based on tech Fossil got its hands on when it acquired wearable maker Misfit for $260 million back in 2015.
Burr’s official statement seems to make clear that Fossil was working on some type of health and wellness-focused technology, and Fossil has been Google’s most consistent and long-term hardware partner on WearOS, since back when it was named Android Wear and Google was looking for watchmakers to help it rival Apple in the wearable space. Burr did tell Wareable that Google saw the technology and thought it “could be brought out in a more expansive way if Google had that technology, and was not only able to continue to use it with Fossil but bring it to other partners in the ecosystem,” she said. Burr goes on to say that Fossil will bring the technology to market in the form of a product and it will expand “across our full breadth of brands over time,” before expanding “across the industry over time to benefit all.” Putting aside the cryptic product innovation talk, Fossil has specialized in what are known as hybrid smartwatches: devices that do some minor smart features like step-tracking and notifications, but otherwise look and feel like your standard, semi-expensive wristwatch. The company makes smartwatches with touchscreens that resemble other WearOS devices and the Apple Watch, but its strong suit has always been the hybrid watch, given Fossil’s design and manufacturing experience in the traditional accessories market. The issue there, however, is that Fossil, while making some of the nicest-looking smartwatches, has been slow to adopt technologies like GPS and heart-rate tracking that have existed on other wearables for years. So in this case, Fossil may have cracked something having to do with hybrid watches, but we just don’t know yet. For Google, this could be a big chance for it to turn WearOS around and truly try to compete with the Apple Watch. 
Whether the Fossil technology pushes Google to finally develop and release an official Pixel Watch with its own internal design, or it simply helps the company better refine its software, this acquisition proves that WearOS still has some fight left in it. Source
9. The platform was built on the backs of independent creators, but now YouTube is abandoning them for more traditional content

Danny Philippou is mad. He’s practically standing on top of his chair as his twin brother and fellow YouTube creator Michael stares on in amusement. Logan Paul, perhaps YouTube’s most notorious character, laughs on the other side of the desk that they’re all sitting around for an episode of his popular podcast Impaulsive. Anyone who’s watched the Philippous’ channel, RackaRacka, won’t be surprised by Danny’s antics. This is how he gets when he’s excited or angry. This time, he’s both. “It’s not fair what they’re doing to us,” Danny yells. “It’s just not fair.” Danny, like many other creators, is proclaiming the death of YouTube — or, at least, the YouTube that they grew up with. That YouTube seemed to welcome the wonderfully weird, innovative, and earnest, instead of turning them away in favor of late-night show clips and music videos. The Philippou twins hover between stunt doubles and actors, with a penchant for the macabre. But YouTube, the platform where they built their audience base, doesn’t seem to want them anymore. A search for the popular video in which the brothers used CGI to re-create Mortal Kombat’s most gruesome “fatalities” resulted in YouTube surfacing mirrored versions, reuploads, and reaction videos. But the original was nowhere to be found. It was hidden by YouTube for violating the company’s guidelines on excessive violence in a video, and the RackaRacka twins think it’s bullshit. “YouTube calls it a glitch in the system, but what fucking glitch? They tell us that our videos aren’t hidden, but then why aren’t any of our videos surfacing?” Danny says. The Philippous’ story is part of a long-brewing conflict between how creators view YouTube and how YouTube positions itself to advertisers and press.
YouTube relies on creators to differentiate itself from streaming services like Netflix and Hulu, it tells creators it wants to promote their original content, and it hosts conferences dedicated to bettering the creator community. Those same creators often feel abandoned and confused about why their videos are buried in search results, don’t appear on the trending page, or are being quietly demonetized. At the same time, YouTube’s pitch decks to advertisers increasingly seem to feature videos from household celebrity names, not creative amateurs. And the creators who have found the most success playing into the platform’s algorithms have all demonstrated profound errors in judgment, turning themselves into cultural villains instead of YouTube’s most cherished assets. As YouTube battles misinformation catastrophes and discovers new ways people are abusing its system, the company is shifting toward more commercial, advertiser-friendly content at a speed its creator community hasn’t seen before. The golden age of YouTube — the YouTube of a million different creators all making enough money to support themselves by creating videos about doing what they love — is over.

It starts with creators... and piracy

YouTube was founded on the promise of creating a user-generated video platform, but it was something else that helped the site explode in popularity: piracy. When Google bought YouTube in 2006 for $1.65 billion, the platform had to clean up its massive piracy problems. It was far too easy to watch anything and everything on YouTube, and movie studios, television conglomerates, and record labels were seething. Under Google, YouTube had to change. So YouTube’s executives focused on lifting up the very content its founders had in mind when they designed the platform: original videos. That focus on creators defined YouTube culture from its earliest days. The platform was a stage for creators who didn’t quite fit into Hollywood’s restrictions.
It allowed people like Jenna Marbles; Felix “PewDiePie” Kjellberg; Anthony Padilla, Ian Hecox, and their channel Smosh; and Lilly Singh to thrive. They were each driven to create a form of entertainment that wasn’t happening elsewhere, and their work was incredibly unique: Marbles riffed on stereotypes of women, Kjellberg became notable for his gaming live streams, Smosh did variety sketches, and Singh did impressions of her Indian Canadian family. Between 2008 and 2011, the volume of videos uploaded to YouTube jumped from 10 hours every minute to 72 hours a minute. By 2011, YouTube had generated more than 1 trillion views; people were watching over 3 billion hours of video every month, and creators were earning real money via Google AdSense — a lot of money. Jenna Marbles was making more than six figures by late 2011. (In 2018, a select group of creators working within YouTube’s top-tier advertising platform would make more than $1 million a month.) By 2012, creators like Kjellberg were leaving school or their jobs to focus on YouTube full-time. He told a Swedish news outlet that he was getting more than 2 million views a month, boasting just over 300,000 subscribers. It was a level of attention completely new to him. “I almost feel unworthy,” Kjellberg said at the time. “It’s too much. I was happy even just having this as my job. But now, when there are so many people that are watching and appreciate it, it’s really a lot of fun.” He, perhaps more than anyone else on the platform, demonstrated what success on YouTube could look like. Between 2011 and 2015, YouTube was a haven for comedians, filmmakers, writers, and performers who were able to make the work they wanted and earn money in the process. It gave birth to an entirely new culture that crossed over into the mainstream: Issa Rae’s Awkward Black Girl series would eventually lead to HBO’s Insecure. 
Creators like the Rooster Teeth team and Tyler Oakley went on tour to meet fans after generating massive followings online. YouTube had reached mainstream success, but in many ways, it still felt wide open. Anyone could still upload almost anything they wanted without much input from YouTube itself.

YouTube’s interests begin to change

Behind the scenes, things were changing. YouTube had begun tinkering with its algorithm to increase engagement and experimenting with ways to bring flashier, produced content to the platform to keep up with growing threats like Netflix. In October 2012, YouTube announced that its algorithm had shifted to prefer videos with longer watch times over higher view counts. “This should benefit your channel if your videos drive more viewing time across YouTube,” the company wrote in a blog post to creators. This meant viral videos like “David After Dentist” and “Charlie Bit My Finger,” which defined YouTube in its earliest days, weren’t going to be recommended as much as longer videos that kept people glued to the site. In response, the YouTube community began creating videos that were over 10 minutes in length as a way to try to appease the system. “I’ve figured out ways to monetize and to take advantage of the power of the algorithm,” comedian Cody Ko told Digiday last year. “Obviously, it preferences longer videos, throwing multiple mid-rolls in, which tons of people do now.” Then there was original content. In 2011, YouTube invested $100 million into more than 50 “premium” channels from celebrities and news organizations, betting that adding Hollywood talent and authoritative news sources to the platform would drive up advertising revenue and expand YouTube to an even wider audience. It failed less than two years later, with what appeared to be a clear lesson: talent native to YouTube was far more popular than any big names from the outside. YouTube took that lesson and made YouTube Red.
In October 2015, YouTube launched a $9.99-per-month subscription plan that included ad-free viewing and new original series. Unlike YouTube’s last premium initiative, YouTube Red leveraged the platform’s homegrown talent — including Singh and Kjellberg — and paired them with professional filmmakers to draw in subscribers. Bringing household celebrity names to YouTube while keeping the faces of the creator community front and center seemed like the best way to make a move into a space dominated by Netflix while staying true to YouTube’s audience. For a while, the creator community, which was thriving thanks to sponsorship deals and Google’s ad platform, was satisfied. Prank channels like Fouseytube jumped in popularity; gaming became a massive ecosystem; beauty how-tos took off; vlogging went mainstream; unboxing videos became all the rage; toy channels exploded out of nowhere; family videos found an ardent niche; and although graphic sketch videos like the RackaRacka twins’ may not have been what Google showed off to advertisers, they were easy to find on the platform. 2015 was a year when YouTube was at its most vibrant. Then, suddenly, creators started encountering problems on the platform. In 2016, personalities like Philip DeFranco, comedians like Jesse Ridgway, and dozens of other popular creators started noticing that their videos were being demonetized, a term popularized by the community to indicate when something had triggered YouTube’s system to remove advertisements from a video, depriving creators of revenue. No one was quite sure why, and it prompted complaints about bigger algorithm changes that appeared to be happening. Kjellberg posted a video detailing how changes had dropped his viewership numbers. He’d been getting 30 percent of his traffic from YouTube’s suggested feed, but after the apparent algorithm update, the number fell to less than 1 percent.
Kjellberg jokingly threatened to delete his channel as a result, which was enough to get YouTube to issue a statement denying that anything had changed. (The denial sidestepped questions of the algorithm specifically, and spoke instead to subscriber counts.) These perceived, secretive changes instilled in creators a distrust of the platform. It also led to questions about their own self-worth and whether the energy they were spending on creating and editing videos — sometimes north of 80 hours a week — was worth it. Anthony Padilla, a co-founder of Smosh and one of the first big creators on YouTube, said in a recent video that the changes began to affect his mental health. Initially, Padilla said he saw a clear line between how much work he put into a video and how many views it brought in. “So I associated that more views meant that my effort paid off,” he said. Then the algorithms appeared to change, and view counts went “all over the place” in ways that he couldn’t make sense of. “I could put hundreds of hours of work into something, and the views could be much lower than I was expecting,” Padilla said. “I would start to equate that with my sense of self-worth.” Padilla wasn’t the only one feeling that way. By the end of 2016, when algorithm changes were creating headaches for some of the platform’s biggest creators, people started announcing they had to take a break from the site they called home. YouTube wasn’t what it was between 2011 and 2016. They didn’t understand it anymore, and trying to keep up with a machine they didn’t recognize and couldn’t rely on was driving people to burn themselves out faster than ever, including creators like Kjellberg. YouTube was exerting more control over what users saw and what videos would make money. Once again, the community would adapt. But how it adapted was far more problematic than anyone would have guessed.
Everything is canceled

By the beginning of 2017, YouTube was already battling some of the biggest problems in its decade-plus history. YouTube’s founders didn’t prepare for the onslaught of disturbing and dangerous content that comes from people being able to anonymously share videos without consequence. Add in a moderation team that couldn’t keep up with the 450 hours of video that were being uploaded every minute, and it was a house of cards waiting to fall. YouTube had come under fire in Europe and the United States for letting extremists publish terrorism recruitment videos to its platform and for letting ads run on those videos. In response, YouTube outlined the steps it was taking to remove extremist content, and it told advertisers it would be careful about where their ads were placed. It highlighted many creators as a safe option. But neither YouTube nor Google was prepared for what Felix “PewDiePie” Kjellberg — one of YouTube’s wealthiest independently made creators — would do. In mid-February 2017, The Wall Street Journal discovered an older video from Kjellberg that included him reacting to a sign held up by two kids that said, “Death to all Jews.” The anti-Semitic comment was included in one of his “react” videos about Fiverr, made after he had pivoted to more of a variety channel instead of focusing just on games. His video, along with reports of ads appearing on terrorist content, led to advertisers abandoning YouTube. Kjellberg was dropped from Disney’s Maker Studios, he lost his YouTube Red series, Scare PewDiePie, and he was removed from his spot in Google Preferred, the top-tier ad platform for YouTube’s most prominent creators. “A lot of people loved the video and a lot of people didn’t, and it’s almost like two generations of people arguing if this is okay or not,” Kjellberg said in an 11-minute video about the situation.
“I’m sorry for the words that I used, as I know they offended people, and I admit the joke itself went too far.” The attention Kjellberg brought to YouTube kickstarted the first “adpocalypse,” a term popularized within the creator community that refers to YouTube aggressively demonetizing videos that might be problematic, in an effort to prevent companies from halting their ad spending. Aggressively demonetizing videos would become YouTube’s go-to move. It would affect everyone, and YouTube’s top talent would use the platform to air their grievances. While people understood why Kjellberg’s channel had come under scrutiny, they weren’t happy about the greater effect it had on the ecosystem. Everyone seemed to be paying for Kjellberg’s mistake.

The Viner invasion

While YouTube grappled with its first adpocalypse, a new generation of stars was flooding the platform. The January 2017 closure of Vine, a platform for looping six-second videos, left a number of creators and influencers without a platform, and many of those stars moved over to YouTube. David Dobrik, Liza Koshy, Lele Pons, Danny Gonzalez, and, of course, Jake and Logan Paul became instant successes on YouTube — even though many of them had started YouTube channels years before their success on Vine. YouTube’s biggest front-facing stars began following in the footsteps of over-the-top, “bro” prank culture. (Think: Jackass but more extreme and hosted by attractive 20-somethings.) Logan Paul pretended to be shot and killed in front of young fans; Jake Paul rode dirt bikes into pools; David Dobrik’s friends jumped out of moving cars. The antics were dangerous, but they caught people’s attention. The Viners quickly amassed tens of millions of subscribers, and they were touted by YouTube as the new faces of the platform in the company’s annual YouTube Rewind video, which is essentially a highlight reel for advertisers.
Jake and Logan Paul became the biggest stars of this new wave, performing dangerous stunts, putting shocking footage in their vlogs, and selling merchandise to their young audiences. Although they teetered on the edge of what was acceptable and what wasn’t, they never really crossed the line into creating totally reprehensible content. But other creators saw something in the Paul brothers that left them concerned. Longtime creators like Kjellberg — who was trying to return his channel to normal after dealing with the consequences of his actions — and Ethan Klein called out the Paul brothers for their dangerous and senseless antics. “This is the state of YouTube, guys,” Klein said of one such video in a response of his own. “These people aren’t like, ‘How do I make good content?’ It’s more like, ‘How do I make dangerous, risky shit that’s so over the top that people have to click it?’” Their complaints propelled the Pauls even further. The brothers hit back with a series of diss tracks that brought in more than 150 million views, making them bona fide, and incredibly wealthy, YouTube celebrities. It didn’t matter that Jake Paul had been evicted for performing YouTube stunts at all hours of the night or that Logan Paul caused a riot at VidCon, a popular YouTube convention, when meeting thousands of his fans. YouTube still promoted them to advertisers and brought them into Google Preferred. Brands like Nike were appearing in Logan Paul videos, even when those videos included children watching a “prank” of their favorite vlogger being shot against a window. It wasn’t a sustainable form of entertainment, and it seemed like everyone understood that except for YouTube. The Paul brothers were on their way to burning out; all it would take was one grand mistake. Even critics of the Pauls, like Kjellberg, empathized with their position.
Kjellberg, who faced controversy after controversy, spoke about feeling as though right or wrong ceased to exist when trying to keep up with the YouTube machine. “The problem with being a YouTuber or an online entertainer is that you constantly have to outdo yourself,” Kjellberg said in a 2018 video. “I think a lot of people get swept up in that ... that they have to keep outdoing themselves, and I think it’s a good reflection of what happened with Logan Paul. If you make videos every single day, it’s really tough to keep people interested and keep them coming back.” Still, Logan Paul was small potatoes compared to YouTube’s bigger problems, including disturbing children’s content that had been discovered by The New York Times and more terrorism content surfacing on the site. Who cared about what two brothers from Ohio were doing? The breaking point would be when Logan Paul visited Japan.

2018 was the nail in the coffin

Logan Paul’s “suicide forest” video irrevocably changed YouTube. In it, Paul and his friends tour Japan’s Aokigahara forest, where they encounter a man’s body. Based on the video, it appears that he had recently died by suicide. Instead of turning the camera off, Paul walks up to the body. He doesn’t stop there. He zooms in on the man’s hands and pockets. In post-production, Paul blurred the man’s face, but it’s hard to see the video as anything but an egregious gesture of disrespect. Within hours of posting the video, Paul’s name began trending. Actors like Aaron Paul (no relation), influencers like Chrissy Teigen, and prominent YouTubers called out Paul for his atrocious behavior. YouTube reacted with a familiar strategy: it imposed heavy restrictions on its Partner Program (which recognizes creators who can earn ad revenue on their videos), sharply limiting the number of videos that were monetized with ads.
In a January 2018 blog post announcing the changes, Robert Kyncl, YouTube’s head of business, said the move would “allow us to significantly improve our ability to identify creators who contribute positively to the community,” adding that “these higher standards will also help us prevent potentially inappropriate videos from monetizing which can hurt revenue for everyone.” Once again, the effects rippled through the community. Smaller creators like Christine Barger, a lifestyle creator who was in the Partner Program, would be kicked out of YouTube’s monetization program under the new requirements. “I feel stupid for crying because, honestly, it’s silly; it’s not like it’s millions of dollars. It’s not about the money. It’s about the fact that I’ve been a part of YouTube for a really long time, and I’ve finally tried to be a part of this platform, just to feel like they don’t care about small creators,” she said. The only people who didn’t receive blame were YouTube executives themselves — something that commentators like Philip DeFranco took issue with after the controversy first occurred. “We’re talking about the biggest creator on YouTube posting a video that had over 6 million views, was trending on YouTube, that no doubt had to be flagged by tons of people,” DeFranco said. “The only reason it was taken down is Logan or his team took it down, and YouTube didn’t do a damn thing. Part of the Logan Paul problem is that YouTube is either complicit or ignorant.” For small creators, the following months would be even worse. YouTube faced an escalating crisis of radicalization and sweeping conspiracy theories that had been ignored by executives for years. The company’s first small efforts to address these serious issues — promoting content from musicians, late-night shows, and recommending fewer independent creators — would have huge secondary effects on the middle-tier creators who had once been the heart of the platform during its golden period.
It pushed YouTube toward the exact same Hollywood content to which it had once been an alternative.

“So what do you do? What happens next?”

In 2014, YouTube launched a glossy ad campaign in cities like New York, Los Angeles, and Chicago, touting the success of its homegrown, independent artists. Writers, actors, directors, and comedians like Video Game High School’s Freddie Wong and Matt Arnold, baker Rosanna Pansino, and Epic Rap Battles were highlighted. But by the middle of 2018, lifestyle vloggers like Carrie Crista, who had just under 40,000 subscribers, were proclaiming how the community felt: forgotten. “YouTube seems to have forgotten who made the platform what it is,” Crista told PR Week. In its attempt to compete with Netflix, Hulu, and Amazon, she said, YouTube is “pushing content creators away instead of inviting them to a social platform that encourages them to be creative in a way that other platforms can’t.” Even people outside of YouTube saw what was happening. “YouTube is inevitably heading towards being like television, but they never told their creators this,” Jamie Cohen, a professor of new media at Molloy College, told USA Today in 2018. By promoting videos that meet certain criteria, YouTube tips the scales in favor of organizations or creators — big ones, mostly — that can meet those standards. “Editing, creating thumbnails, it takes time,” Juliana Sabo, a creator with fewer than 1,000 subscribers, said in 2018 after the YouTube Partner Program changes. “You’re just prioritizing a very specific type of person — the type of person that has the time and money to churn out that content.” Individual YouTube creators couldn’t keep up with the pace YouTube’s algorithm set. But traditional, mainstream outlets could: late-night shows began to dominate YouTube, along with music videos from major labels. The platform now looked much the way it had in its earliest days, full of professional TV clips and music videos, but now with Hollywood’s stamp of approval.
YouTube executives like Kyncl aren’t trying to hide it, either. At the company’s 2018 upfront in New York City, a presentation put on for advertisers at Radio City Music Hall, top creators were nowhere to be found. Instead, there was the YouTube the company wants advertisers to see: Ariana Grande on Vevo, series from Kevin Hart and Demi Lovato, clips from The Tonight Show Starring Jimmy Fallon. “A year and a half ago, it was all fine,” Danny Philippou says on Logan Paul’s Impaulsive podcast. “We could do whatever we wanted. We could upload whatever we wanted, but every video lately has been getting demonetized and it’s screwed us up. Don’t get us wrong; we still put 200 percent effort into every video we make, but creatively and financially we’ve been at a bit of a standstill.” In the process of writing this story, YouTube representatives called multiple times to inquire about the piece and request more time in order to respond. Initially, the company declined to comment, including on why RackaRacka’s Mortal Kombat video wasn’t appearing in search. As of this week, the video has started to show up again. After publication, representatives for YouTube said they felt we had not given them enough detail to respond to this story’s core assertions: that creators feel forgotten and that YouTube is shifting toward more traditional content. In an email, a spokesperson provided the following statement: “Our core content strategy and investment remains centered on our endemic creators.” The RackaRacka brothers are tired. “We loved it before when it was like, ‘Oh, you guys are doing something unique and different. Let’s help you guys so you can get views and get eyes on it,’” Danny says. “I’d love to go back to that. We have so many big, awesome ideas that we’d love to do, but there’s no point in doing it on YouTube.” There’s a moment in the podcast, toward the end, when Logan looks at the Philippou brothers and asks them, “So what do you do?
What happens next?” They laugh, and then Michael answers without hesitation, with the expediency of someone who’s thought of this for some time. “We leave. We find somewhere else that wants our videos. That used to be YouTube, but it’s not anymore. And I don’t think it ever will be again.” Source
10. But who knows when that will happen

On Wednesday, the Federal Trade Commission settled an investigation into the Google-owned video platform YouTube, resulting in the largest fine ever levied on a tech company for violating children’s privacy law. But the $170 million in penalties and new restrictions over children’s data, critics argue, do little to incentivize the company to change its behavior. Over the past few months, the FTC has settled a handful of prominent cases in which companies like Facebook, Equifax, and YouTube have mishandled the data of their customers and users. But in each of these cases, consumer advocacy groups and politicians have cried out, asking for the only agency with the authority to protect user privacy, the FTC, to be tougher on these companies. “The FTC pulled the curtain back on this practice, but it did not go far enough to put in place critical new rules for accountability,” Sen. Ed Markey (D-MA), who authored the law that Google allegedly violated, said in a statement. “The FTC let Google off the hook with a drop-in-the-bucket fine and a set of new requirements that fall well short of what is needed to turn YouTube into a safe and healthy place for kids.” “We are very disappointed that the Commission failed to penalize Google sufficiently for its ongoing violations of COPPA and failed to hold Google executives personally responsible for the roles they played,” the Center for Digital Democracy’s executive director Jeff Chester said. “A paltry financial penalty . . . sends a signal that if you are a politically powerful corporation, you do not have to fear any serious financial consequences when you break the law.” The Republican commissioners, like Chairman Joe Simons, touted the historic nature of the settlement in an attempt to counter much of the criticism — and they’re not entirely wrong.
Yes, it is the largest monetary penalty ever imposed as a result of Children’s Online Privacy Protection Act (COPPA) violations, but what the agency received in relief this week is miles away from the statutory maximum. For violating COPPA, companies can be fined $42,530 per violation, and those fines can be handed out on a per-child and per-day basis. Democratic Commissioner Rohit Chopra called attention to this in his dissenting statement on Wednesday, writing that the agency should have issued a penalty in the billions just to cover the ill-gotten revenue Google made from this behavior over the years. “The terms of the settlement were not even significant enough to make Google issue a warning to its investors,” Chopra wrote. “Google earned – and will continue to earn – enormous sums by illegally tracking kids in many ways.” But in order to receive a larger payout and more significant structural changes from Google, the FTC would have had to take it to court. That kind of case could last years, and the commissioners wouldn’t have any certainty on the amount of relief they would receive in the end. The FTC is a small organization compared to Google. Taking the company to court would trouble the FTC far more than it would Google, a company with billions of dollars in profit and full-time legal staff. Without a federal law that outlines how the FTC should police privacy, the agency is effectively making up the rules as it goes along, with its own prior consent decrees and decisions as its baseline for enforcement. In Google’s case, there was a law, COPPA, but it has never been challenged in court. There’s no precedent. So commissioners felt better using their prior COPPA settlements, like the one with TikTok, to guide a deal. For months, FTC officials, including Simons, have been begging Congress for the ability to act in response to initial offenses.
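Commissioner Chopra’s “billions” figure is easy to sanity-check against the statute, since COPPA fines can stack per child and per day. Here is a minimal back-of-the-envelope sketch in Python; the $42,530 per-violation maximum comes from the article, but the child and day counts are purely illustrative assumptions, not figures from the case:

```python
# COPPA statutory maximum at the time: $42,530 per violation,
# assessable on a per-child and per-day basis.
PER_VIOLATION_MAX = 42_530

# Hypothetical scale, purely for illustration:
affected_children = 1_000_000
days_of_violation = 30

max_exposure = PER_VIOLATION_MAX * affected_children * days_of_violation
print(f"Theoretical maximum exposure: ${max_exposure:,}")   # $1,275,900,000,000
print(f"Actual settlement:            ${170_000_000:,}")    # $170,000,000
```

Even these modest assumptions put the theoretical exposure at roughly $1.3 trillion, nearly four orders of magnitude above the actual settlement, which is the gap Chopra’s dissent points at.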
In a statement following the agency’s settlement announcement with Facebook in July, Simons said, “I renew my call for Congress to enact federal data security legislation that gives the FTC authority to seek civil penalties for first-time violations.” He continued, “Fortunately, other agencies were able to fill in the gap—this time. But under different circumstances, future breaches might not always be subject to civil penalties, which sends absolutely the wrong signal regarding deterrence.” A clear privacy law like COPPA is something agency officials have been begging lawmakers to write for years. Currently, the FTC generally can’t issue fines or penalties for first-time privacy offenses since there is no federal privacy law. Under COPPA, the FTC can fine a company for its first offense because the law itself grants the agency that authority, but only when children’s privacy is mishandled. The Senate Commerce Committee has been working on a privacy bill for months. Last we heard from the chairman, Sen. Roger Wicker (R-MS), legislation was expected to be introduced by Labor Day, but that holiday has come and gone. The House Energy and Commerce Committee is also expected to draft a bill, but little has come of its discussions so far. There are a handful of other privacy measures that have already been introduced, but they’ve found little momentum. But until Congress does something, there will be few consequences for Big Tech when it violates the privacy of its users. YouTube’s fine for allegedly breaking the law may only add up to a few days in profit, but the structural changes imposed by the FTC will hurt the most: YouTube is now prohibited from serving targeted ads, its most lucrative ad format, on children’s videos, including those that receive tens of millions of views. Source
11. Fuchsia may hold the key to the future of Android, Chrome, and everything in between

Photo: Android and Chrome chief Hiroshi Lockheimer speaking at a live recording of The Vergecast at Google I/O 2019 in Mountain View, California. Image: Google

Fuchsia remains shrouded in mystery, but the company is slowly beginning to open up about the next-generation operating system, what its purpose is, and what devices it might power. At Google’s I/O developer conference this past week, Android and Chrome chief Hiroshi Lockheimer offered some rare insight into Fuchsia, albeit at a very high level, in front of public audiences. What we do know about Fuchsia is that it’s an open source project, similar to AOSP, but one that could run on all manner of devices, from smart home gadgets to laptops to phones. It’s also known to be built on an all-new, Google-built kernel called “zircon,” formerly known as “magenta,” and not the Linux kernel that forms the foundation of Android and Chrome OS. Beyond that, we don’t know much, and we’ve only really gotten a brief peek at a prototype Fuchsia-powered user interface, two years ago. There have also been reports over the last 12 months or so regarding Google Fuchsia dev tests on the Pixelbook and nebulous plans for a product development timetable that would see an official Fuchsia device released in three to five years. Plus, the Google Home Hub (now called the Nest Hub) is thought to be one of the test devices for Fuchsia. But onstage during a live recording of The Vergecast yesterday, Lockheimer finally opened up about the ultimate goal of Fuchsia. “We’re looking at what a new take on an operating system could be like. And so I know out there people are getting pretty excited saying, ‘Oh this is the new Android,’ or, ‘This is the new Chrome OS,’” Lockheimer said. “Fuchsia is really not about that.
Fuchsia is about just pushing the state of the art in terms of operating systems and things that we learn from Fuchsia we can incorporate into other products.” He says the point of the experimental OS is to also experiment with different form factors, a hint toward the possibility that Fuchsia is designed to run on smart home devices, wearables, or possibly even augmented or virtual reality devices. “You know Android works really well on phones, and you know, in the context of Chrome OS, as a runtime for apps there. But Fuchsia may be optimized for certain other form factors as well. So we’re experimenting.” Lockheimer became somewhat cryptic at the end of his answer, following it up with, “Think about dedicated devices... right now, everybody assumes Fuchsia is for phones. But what if it could be used for other things?” At a separate Android fireside chat held at Google I/O earlier today, Lockheimer provided some additional details, though he remained similarly cryptic about specifics. “It’s not just phones and PCs. In the world of [the Internet of Things], there are an increasing number of devices that require operating systems and new runtimes and so on. I think there’s a lot of room for multiple operating systems with different strengths and specializations. Fuchsia is one of those things and so, stay tuned,” he told the audience, according to 9to5Google. Source
12. In my previous post I talked about why consent matters when it comes to privacy; and yet, privacy is only one of the areas where tech companies take advantage of users without their consent. Recently, tech companies have come to a troubling consensus: that they can change your computer remotely (and often silently), without your knowledge or permission. Some examples of this include:

- Apple silently disabling a 3rd-party service on Macs because of a security vulnerability, even though a patch was already available
- Apple blocking Facebook and Google from running their internal iOS apps, even on their internal company iPhones
- Microsoft (up until this past May) forcing feature updates to Windows 10 (and often breaking consumer desktops)
- Dropbox silently updating their software to install a new file manager
- The classic, irony-filled example of Amazon removing 1984 from Kindle e-book readers

Below you will find the origins of this mentality, the risks and harm that arise from it, and what it says about who really owns a computer.

Enterprise IT and the origins of “remote control”

Anyone who has ever worked for a large company in the computer age has experienced first-hand the authoritarian, controlling, and restrictive policies that IT employs to manage company computers. Starting with centralized systems like Active Directory, IT teams were able to create policies that controlled what sorts of passwords employees could use and whether employees could install applications, access printers, and even, in some cases, insert USB drives. These centralized tools have evolved over the years: they can now add and remove files, install new software and software updates, and remotely control machines over the network in order to view what’s on their screens and access local files. This control extends into the Active Management Technology features embedded in the Intel Management Engine, which let administrators remotely control computers even when they are turned off.
Now that smartphones are critical tools in many organizations, MDM (Mobile Device Management) tools are also often employed at enterprises to bring those devices under a similar level of control–with the added benefit of using GPS to track employee phones even outside the office. The most common justification for these policies is convenience. If you are an IT department and have thousands of employees–each with at least one computer and one smartphone that you need to support–one of the ways to make sure that the appropriate software is on the systems, and updates get applied, is to push them from a central location. Companies often have custom in-house software their employees rely on to do their jobs, and throughout the life of the company more tools are added to their toolbox. You can’t expect the IT team to go desk-by-desk installing software by hand when you have thousands of employees working at offices all over the world: when an employee’s computer breaks, these same tools make it easy for IT to replace the computer so the employee can get back to work quickly. The main justification for the strictest–and most controlling–IT policies isn’t convenience, though: it’s security. IT pushes software updates for protection against security bugs. They push anti-virus, anti-malware, and remote monitoring tools to protect both employee and company from dangerous email attachments and from software they might download from their web browsers. IT removes local administrative privileges from employees in the name of protecting them from installing malware (and, practically speaking, from installing games and other time-wasting apps). They disable USB storage devices so employees can’t insert disks containing malware or copy off sensitive company documents. Each of these practices has valid reasons behind it for companies facing certain threats.

Are users children?
Information security professionals spend much of their time solving problems in the enterprise IT space; as a result, they often take on some of the same patronizing views of users you find in IT. Many view themselves as parents and users as children, their role being to wrap the hard corners of the digital world in foam so users don’t hurt themselves. This patronizing view leads them to pick security measures that remove control and autonomy from end users and centralize that power in the hands of IT or information security. The repeating refrain is “just trust us”: users must place full trust in the internal security team, or the third-party enterprise security vendor, to be safe.

Most users tend to bristle at these kinds of security policies–especially as generations enter the workforce who grew up with computers and are increasingly savvy and knowledgeable about how to use them. All the same, in the workplace employees have grown accustomed to giving up much of their autonomy, control, and privacy for the sake of the company. Yet you can tell that this approach runs against our nature, because so many companies have had to explain these policies in new-hire documents and require that employees agree to and sign them when they are hired. These documents inform employees that the computers they use and the documents they access are company property–and that the company is authorized to monitor and control its property at all times.

Remote control spreads to consumer devices

You could make a convincing argument that, since companies have paid for, and do own, all of the computers they provide to their employees, and pay IT teams to maintain them, it’s their right to set up software to control them remotely. As draconian and privacy-invading as some corporate policies are, you can still argue that employees consented to this level of control when they signed their employment contracts.
The problem is that this patronizing, authoritarian approach to enterprise IT has now found its way into consumer devices as well, because it’s in a tech company’s interest to have as much power over its customers as possible. Unlike in the enterprise, though, this remote control is on by default and without explicit consent. More and more tech companies are hiring themselves as their customers’ IT staff, granting themselves remote control over their customers’ computers–always in the name of convenience and security.

The most common form of remote control is automatic updates. On the surface, automatic security updates make sense: people can’t be expected to know about all of the security vulnerabilities in all of their software, so it makes sense to make patching easier for them. The problem is that many companies now set this behavior as the default–without user consent–and don’t limit themselves to security updates. Instead, they also push any other changes they want, including normal feature updates, new advertising in the OS, and automatically logging users into their Google accounts. These updates often have critical bugs themselves, but since they go along for the ride with security updates, people are left with the false choice between security and stability. Because these updates happen behind the scenes, without any prompts or notices, users have little to no control over whether, or when, the updates happen.

On phones, this control can also extend to whether a user is allowed to install an application, use it after they have installed it, or–in the famous example of Google and Huawei being caught up in the US/China trade war–keep getting updates for their phone at all. Most recently, Adobe has told its customers they could be sued if they don’t upgrade–using older versions of the software they bought apparently being against the licensing agreement!

Who owns your computer?
The irony is that, decades ago, when the average person had minimal experience with computers, those inexperienced users had much more control and autonomy over them. Many people today grew up with computers and smartphones, and technology is second nature to them. Many switch between operating systems, laptop brands, and phone vendors as effortlessly as if they were switching between car brands. Yet, at a time when individuals are more capable of using computers, and computers are simpler to use than ever before, tech companies have decided that people can’t be trusted to manage their own devices–that the vendor should have more control than ever before.

In the case of enterprise IT, it’s clear that the company owns employee computers and exercises rightful control over its own property. But what does it mean if a tech company exercises the same kind of control over consumer computers or phones? If hardware vendors have the power to change your computer silently, without your consent–including 3rd-party applications you installed yourself–is the computer really yours? If phone vendors decide which applications you can install, can remotely disable applications from running, and can stop you from getting updates, is the phone really yours? If software vendors can install major feature changes without your permission, force you to update, and even sue you if you don’t update to their latest versions–is the software really yours?

The solution is consent

The solution to this problem of remote control is pretty simple: consent. While many people in security circles believe the ends justify the means, there are many examples where the same action, leading to the same result, takes on a completely different tone–all depending on whether or not the actor got consent. Some people may be more than happy to put their hardware or software vendor, or the IT department, in charge of their devices, but the vendor should still get permission first.
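The consent-first approach described above can be sketched in a few lines. This is a hypothetical illustration (the function names and prompt wording are mine, not any vendor's real updater): the updater describes the pending change and defaults to doing nothing unless the user explicitly approves.

```python
def request_consent(description: str, ask=input) -> bool:
    """Show the pending change and return True only on explicit approval.

    The prompt defaults to "no": silence or a stray keypress never
    authorizes a change to the user's machine.
    """
    answer = ask(f"{description}\nApply this change? [y/N] ").strip().lower()
    return answer in ("y", "yes")


def apply_update(name: str, details: str, ask=input) -> bool:
    """Install an update only after the user has approved it."""
    if not request_consent(f"Update available: {name}\n{details}", ask):
        print(f"Skipped '{name}'; nothing on this machine was changed.")
        return False
    print(f"Installing '{name}'...")  # the real installation would run here
    return True


# Users who fully trust the vendor just answer "y" and keep today's
# convenience; everyone else retains the final say over their machine.
apply_update("Security patch 1.2.3",
             "Fixes a remote code execution bug; no feature changes.",
             ask=lambda prompt: "y")
```

The `ask` parameter exists only so the prompt can be tested or replaced by a GUI dialog; the design point is simply that approval is collected before, not after, the change.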
While many vendors will point to their click-through agreements as proof of consent, customers can’t be expected to read (or understand) these agreements, so they are no more valid a form of consent than a click-through privacy policy. If you have to accept a license agreement before you can use a computer or software, it’s not really consent–it’s an ultimatum.

Consent doesn’t need to mean users will be at risk from malware or security bugs; it just means they give permission before a company changes files on their computer. Vendors can add a simple prompt that explains what’s about to happen so the customer can approve it. The customers that don’t care, or that fully trust the vendor, will still click Accept regardless; customers that do care retain control over their computers and can investigate and approve the change first. The problem with removing everyone’s power because you assume most people are apathetic is that many people are apathetic precisely because they feel powerless in the face of Big Tech companies.

Source