Showing results for tags 'algorithms'.

Found 5 results

  1. New DoS attack exploits algorithms to knock sites offline

The attack sends junk data to algorithms for processing.

Distributed Denial of Service (DDoS) attacks have caused their share of online chaos in the past, from being used to target the messaging service Telegram during the Hong Kong unrest to crippling emergency communication systems in the US. Now, researchers have described a new vulnerability that could affect sites all over the internet.

The exploit was detailed at the Black Hat cybersecurity conference in Las Vegas by Nathan Hauke and David Renardy of security company Two Six Labs, as reported by Wired. Rather than a traditional DDoS attack, which overwhelms a server with junk traffic requests from hundreds of different computers until it fails, the new attack uses a related technique called Denial of Service (DoS). A DoS attack can originate from just one machine, and this one targets the algorithms that many sites use for data processing.

The researchers found a common vulnerability across three sets of software: they could throw large amounts of data at algorithms, which then try to process it all and crash. This worked against PDF software, where uploading a single large PDF file could crash a whole website; against Virtual Network Computing (VNC) servers, which could be filled with junk data until they crashed; and against password-strength software developed by Dropbox, which could be stalled when a user entered thousand-character passwords.

In each case, the attacks take advantage of the large amount of processing done by algorithms. If these algorithms are fed enough junk data, they can gum up a website and cause server outages. The researchers say they want to bring this class of vulnerability to developers' attention, and they have created a tool called ACsploit which developers can use to generate the "worst-case inputs for algorithms" and test against them.
Source: New DoS attack exploits algorithms to knock sites offline
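The underlying idea is an algorithmic-complexity attack: craft an input that drives an algorithm into its worst case. The researchers' actual exploits are not reproduced here; this is a generic sketch using a naive quicksort (first-element pivot), which an already-sorted input degrades from roughly n log n to n² comparisons.

```python
import random

def quicksort(items, counter):
    """Naive quicksort with a first-element pivot.
    counter[0] accumulates the number of pivot comparisons."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)  # each remaining element is compared to the pivot
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return quicksort(lo, counter) + [pivot] + quicksort(hi, counter)

n = 500
benign = random.sample(range(n), n)  # shuffled input: average case
hostile = list(range(n))             # sorted input: worst case for this pivot choice

c_benign, c_hostile = [0], [0]
quicksort(benign, c_benign)
quicksort(hostile, c_hostile)

# The crafted input forces n*(n-1)/2 = 124,750 comparisons for n=500,
# far more work than the shuffled input needs.
print(c_benign[0], c_hostile[0])
```

Tools like ACsploit automate exactly this step: generating the sorted-list equivalent for whatever algorithm a target service happens to run.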
  2. How is it that our brains – the original face recognition program – can recognize somebody we know, even when they're far away? How do we recognize those we know in spite of their faces appearing to flatten out the further they are from us?

Cognitive experts say we do it by learning a face's configuration – the specific pattern of feature-to-feature measurements. Then, even as our friends' faces get optically distorted by being closer or further away, our brains employ a mechanism called perceptual constancy that optically "corrects" face shape... at least, it does when we're already familiar with how far apart our friends' features are.

But according to Dr. Eilidh Noyes, who lectures in Cognitive Psychology at the University of Huddersfield in the UK, the ease of accurately identifying people's faces – enabled by our image-being-tweaked-in-the-wetware perceptual constancy – falls off when we don't know somebody. This also means there's a serious flaw in facial recognition systems that use what's called anthropometry: the measurement of facial features from images. Because the distance between the features of a face varies with the camera-to-subject distance, anthropometry just isn't a reliable method of identification, Dr. Noyes says.

In the abstract of a paper published in the journal Cognition – which came out of research by Noyes and the University of York's Dr. Rob Jenkins on the effect of camera-to-subject distance on face recognition performance – the researchers write that identification of familiar faces was accurate, thanks to perceptual constancy. But they found that changing the distance between camera and subject from 0.32m to 2.70m impaired perceptual matching of unfamiliar faces, even though the images were presented at the same size.
To reduce the face-matching errors that stem from this flaw in anthropometry before it reaches real-world use cases – such as facial recognition at passport control or for creating national IDs – industry has to take the distance piece of the puzzle into account, she says.

Or here's a thought: perhaps this new finding can be used by lawyers working on behalf of people imprisoned after their faces were matched with those of suspects in grainy, low-quality photos – people like Willie Lynch, who was imprisoned even though an algorithm expressed only one star of confidence that it had generated the correct match.

Noyes, a specialist in the field, was one of 20 global academic experts invited to a recent conference at the University of New South Wales, home of the Unfamiliar Face Identification Group (UFIG). The conference's title was Evaluating face recognition expertise: Turning theory into best practice. The University of Huddersfield said the workshop and conference, which gathered 20 world-leading experts in the science of face recognition in Australia, were designed to lead to policy recommendations that will aid police, governments, the legal system and border control agencies.

Source: Huge flaw found in how facial features are measured from images (via Naked Security by Sophos)
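The geometry behind the claim can be sketched with a simple pinhole-camera model. This is not the paper's method, and the feature sizes below are assumed round numbers; it only shows that when two facial measurements sit at slightly different depths (the nose tip protrudes toward the camera), their projected ratio – the quantity anthropometry relies on – changes with camera-to-subject distance.

```python
def projected(size_m, distance_m, focal_px=1000.0):
    """Projected size in pixels of a feature of physical size size_m
    at distance_m from an ideal pinhole camera."""
    return focal_px * size_m / distance_m

IPD = 0.063        # interocular distance in metres (assumed value)
NOSE = 0.035       # nose width in metres (assumed value)
PROTRUSION = 0.02  # nose tip sits ~2 cm closer to the camera (assumed)

for z in (0.32, 2.70):  # the two camera distances used in the study
    eyes = projected(IPD, z)
    nose = projected(NOSE, z - PROTRUSION)
    print(f"z = {z:.2f} m  eyes/nose ratio = {eyes / nose:.3f}")
```

At 0.32m the ratio is about 1.69; at 2.70m it is about 1.79 – a roughly 6% shift in a measurement that an anthropometric matcher would treat as a stable property of the face.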
  3. A Council of Citizens Should Regulate Algorithms

To ensure our AI-driven future is just and equitable, we should borrow from ancient Athens.

Are machine-learning algorithms biased, wrong, and racist? Let citizens decide.

Machine-learning algorithms – essentially rule-based structures for making decisions – play an increasingly large role in our lives. They suggest what we should read and watch, whom we should date, and whether or not we are detained while awaiting trial. Their promise is huge: they can better detect cancers. But they can also discriminate based on the color of our skin or the zip code we live in.

Despite their ubiquity in society, no real structure exists to regulate algorithms' use. We rely on journalists or civil society organizations to serendipitously report when things have gone wrong. In the meantime, the use of algorithms spreads to every corner of our lives and to many agencies of our government. In the post-Covid-19 world, the problem is bound to reach colossal proportions.

A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like. We don't know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity: algorithms follow logical rules in order to optimize for a given outcome, while public policy is all a matter of trade-offs – optimizing for some groups in society necessarily makes others worse off. Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: citizens should have a say.
We don't know how to regulate algorithms because we have become shockingly bad at citizen governance. Is citizen governance feasible today? Sure it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to decide very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw on the past for inspiration on how to actually build citizen-run institutions.

The ancient Athenians – the citizens of the world's first large-scale experiment in democracy – built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those who had not served the year before and had not already served twice.

These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because the term was limited and could not be repeated more than twice, over time a broad section of the population – rich and poor, educated and not – participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw on the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom had already served and some of whom soon would.

Certainly, the Athenians did not follow through on their commitment to inclusion. As a result, many people's voices went unheard, including those of women, foreigners, and slaves.
But we don't need to follow the Athenian example on this front. A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required).

Citizens' deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, by internal auditing reports for industry, and by reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency. It could evaluate, as OpenAI recommends, a variety of issues including the level of privacy protection, the extent to (and methods by) which the systems were tested for safety, security, or ethical concerns, and the sources of data, labor, and other resources used.

Reports like the one OpenAI just released provide an important first step in the process of getting industry buy-in. The report highlights both the risks of unregulated development of the technology and the benefits of an inclusive process for devising regulatory bodies. For example, industry could play a role in the selection process or in the choice of material available to the councillors, or it could provide expert advice.

The council would be a fair and efficient response to the question of how to resolve the societal trade-offs that algorithms create. Unlike proposed technocratic solutions and traditional auditing structures, the council would expand the range of possible solutions to the problems algorithms create, enhance democratic accountability, and foster citizen participation and learning. The erosion of commitments to democratic norms and institutions around the world calls for new ideas.
The time is ripe for considering creative institutional solutions to some of the greatest challenges society faces. Athens' democracy reminds us that we have been outsourcing governance for two and a half millennia, first to kings, then to experts, and now to machines. This is an opportunity to reverse the trend.

Source: A Council of Citizens Should Regulate Algorithms
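The Athenian selection rules described above are concrete enough to state in code. The sketch below is an illustrative simulation with assumed numbers (10 tribes of 500 citizens each), not a design for the proposed algorithm council: each year, 50 citizens per tribe are drawn by lot, excluding anyone who served the previous year or has already served twice.

```python
import random

def draw_council(tribes, served_last_year, lifetime_counts, seats=50):
    """Draw one year's council by lot under the Athenian rules."""
    council = []
    for members in tribes.values():
        eligible = [c for c in members
                    if c not in served_last_year
                    and lifetime_counts.get(c, 0) < 2]
        council.extend(random.sample(eligible, seats))
    return council

# Toy demographics: 10 tribes of 500 citizens each.
tribes = {i: [f"tribe{i}_citizen{j}" for j in range(500)] for i in range(10)}
lifetime = {}
last_year = set()
for year in range(5):
    council = draw_council(tribes, last_year, lifetime)
    for c in council:
        lifetime[c] = lifetime.get(c, 0) + 1
    last_year = set(council)

# 500 councillors each year; nobody ever serves a third term.
print(len(council), max(lifetime.values()))
```

The two exclusion rules are what drive the broad-participation effect the article describes: with term repetition capped, the pool of past councillors keeps growing year over year.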
  4. China wants automated systems to echo state policies.

China is once more tightening its grip on internet content, and this time algorithms are in the spotlight. The Cyberspace Administration of China has published upcoming rules that dictate how internet companies manage content, including a push for recommendation algorithms that promote "positive" ideas (read: government policies) while excluding "bad" material. The measure explicitly forbids content that "endangers national security, leaks state secrets, subverts state power [and] undermines national unity." In other words, internet companies can't challenge the political status quo. The new rules are due to take effect on March 1st, and they also call for tighter management of accounts, sign-ups, moderation and "rumors."

Governments have lately stepped up attempts to regulate algorithms, although China's approach is very different from that of other countries. A recent US Senate bill aimed to eliminate bias in algorithms, but China is explicitly advocating bias – it sees this automated code as a vehicle for the party agenda. The country is clearly concerned that recommendation engines could surface 'dangerous' material that human censors would otherwise catch, and it doesn't want to take any chances.

Source
  5. Twitter's Jack Dorsey wants to build an app store for social media algorithms

Dorsey expands on Twitter's vision for a decentralized network.

Twitter CEO Jack Dorsey imagines a future where you get to choose what you see on social media by picking out your favorite recommendation algorithm, rather than relying on a single controlling company to get it exactly right. On a call with investors today, Dorsey expanded on his vision of how a decentralized social network might work, and why Twitter would want to create a network beyond the control of itself or any other company.

Dorsey said Twitter would benefit from access to "a much larger corpus of conversation" from which it can surface relevant content to users. "That's where we will be competitive," he said. "That is something that not only we can host but we can participate in."

Dorsey said Twitter is "excited to build" features that will give people more choice over what they see. "You can imagine an app-store-like view of ranking algorithms that give people ultimate flexibility in terms of" what posts are put in front of them, Dorsey said on the call. Twitter could both create its own ranking algorithms for people to choose from and offer a "marketplace" where people select different options. Giving people more choice would "not only help our business, but drive more people into participating in social media in the first place," he said.

Dorsey also sees decentralization as a way to "address some of the problems" around Section 230, the law that shields platforms from liability for user-created content.
The law has recently been a popular target for some legislators, and a decentralized network might offer Twitter a way to avoid issues if stricter rules were put in place, whether those rules require more moderation or a more neutral approach to what content is surfaced.

Twitter has slowly been working on a decentralized social network for more than a year. Dorsey announced the project, named Bluesky, in December 2019, but the company is only now trying to hire a project manager, so things are clearly a ways off. TechCrunch published a deep dive last month on where the project is headed, describing Bluesky as "firmly in the research phase." Decentralized social networks already exist, though none have had great success (remember Mastodon?). Twitter said the Bluesky team may choose to work with an existing decentralized system, or it may build a new one if the team thinks that's the best approach.

Source: Twitter's Jack Dorsey wants to build an app store for social media algorithms
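The "app-store-like view of ranking algorithms" Dorsey describes is easy to picture as an interface: the timeline pipeline stays fixed while the ranking function becomes a user-selected plug-in. Everything below is invented for illustration – it is not Bluesky's or Twitter's actual design.

```python
from typing import Callable, Dict, List

Post = Dict[str, object]
Ranker = Callable[[List[Post]], List[Post]]

RANKERS: Dict[str, Ranker] = {}  # the "store" of installable ranking algorithms

def register(name: str):
    """Decorator that lists a ranking algorithm in the store."""
    def deco(fn: Ranker) -> Ranker:
        RANKERS[name] = fn
        return fn
    return deco

@register("chronological")
def chronological(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

@register("most_liked")
def most_liked(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def timeline(posts: List[Post], choice: str) -> List[Post]:
    """Build a user's timeline with whichever ranker they picked."""
    return RANKERS[choice](posts)

posts = [{"id": 1, "timestamp": 100, "likes": 5},
         {"id": 2, "timestamp": 200, "likes": 1}]
print([p["id"] for p in timeline(posts, "chronological")])  # [2, 1]
print([p["id"] for p in timeline(posts, "most_liked")])     # [1, 2]
```

The marketplace in this sketch would amount to hosting and vetting third-party entries in that registry, with the client swapping rankers according to the user's choice.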