Showing results for tags 'ireland'.

Found 6 results

  1. Get cell broadcast tech in, urges onetime Lulzsec white hat

     Ireland's efforts to keep residents informed about coronavirus have fallen foul of the same basic SMS vulnerability that its British neighbour experienced back in March. Lulzsec-bod-turned-security-consultant Jake Davis reckoned the Irish government is using an SMS sender name that is vulnerable to spoofing – a process that is simple and straightforward, not that we're going to explain how it's done.

     Anyone flying to the Emerald Isle must give their contact details to immigration staff, including a mobile phone number. That number is then sent a text from "gov ie" with details of how to call a doctor and get public health advice if one starts experiencing COVID-19 symptoms. "Fairly standard and responsible stuff," commented Davis.

     What was not "standard and responsible", in his view, was Ireland's mobile networks not blocking the sender name from being reused by anyone else at all. As he related it: "Now, when Darren [Martyn, a fellow infosec researcher] said to me 'hang on, can you try sending me a cheeky spoofed text from this sender?' my immediate thought was that there's no way this will work using basic SMS tricks."

     This was the result (screenshot): one of these two messages was sent by the Irish government, though both appear to come from the same sender.

     Earlier this year the British government briefly flirted with doing the same thing, with Davis blogging at the time (as we reported) that this was a "schoolyard" level of exploit. In Ireland's case, Davis warned the local authorities before disclosing his findings to El Reg and the wider world via his blog.
     Calling for authorities in the UK to invest in mass-message cell broadcast technology ("It's faster, cheaper, and reaches 99 per cent of phones in a secure and reliable fashion"), Davis also urged governments in general to "liaise with known SMS API providers and local mobile carriers beforehand to make them aware of which names/numbers they'll be sending important texts from" and block those sender names and numbers from being used by others.

     It may surprise some readers to learn that this is not done by default – which is why unscrupulous telemarketing scammers can appear to be calling from phone numbers not actually assigned to them. As we reported in March when UK.gov's first mass-text-messaging campaign began, SMS cell broadcast was trialled in the early 2010s, but the initially promising trials faltered, with no further progress since 2014 [PDF].

     Standard anti-phishing advice is not to click links or dial numbers included in unsolicited messages, advice that still stands today despite the desperation of the UK government and the NHS to broadcast public health messages using these very techniques.

     Source
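The fix Davis recommends amounts to a carrier-side filter: keep a registry of protected sender names and reject submissions that reuse them from unauthorised accounts. Here is a minimal Python sketch of that idea; the sender name "gov ie" comes from the article, but the registry shape, account names, and function signature are illustrative assumptions, not any carrier's real API.

```python
# Sketch of sender-ID blocking at the carrier/aggregator side.
# Registry maps each protected alphanumeric sender name to the set of
# accounts allowed to use it (all values here are hypothetical).
PROTECTED_SENDERS = {
    "gov ie": {"hse-official"},
}

def accept_message(sender_id: str, account: str) -> bool:
    """Reject messages that reuse a protected sender name from an
    unauthorised account; all other traffic passes through unchanged."""
    allowed = PROTECTED_SENDERS.get(sender_id.lower())
    if allowed is None:
        return True               # name not protected: accept
    return account in allowed     # protected: whitelisted accounts only

# A spoofer reusing the government's sender name is blocked,
# while the legitimate sender and unrelated names still work:
assert accept_message("gov ie", "random-sms-api-user") is False
assert accept_message("gov ie", "hse-official") is True
assert accept_message("PizzaShop", "random-sms-api-user") is True
```

The point of the sketch is that the check is trivial once the registry exists; the article's complaint is that no such registry was consulted by default.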
  2. Voters have overwhelmingly supported the proposal to remove blasphemy as an offence from the Constitution.

     The final result from Friday's referendum shows that 64.85% voted Yes while 35.15% voted No. A total of 951,650 people voted for the change, with 515,808 opposing the move.

     The Constitution currently holds that blasphemy is a criminal offence, and it is defined as such in the Defamation Act of 2009. Following the result of this referendum, the word blasphemous will be removed from the Constitution and the Oireachtas will then change the law.

     Donegal recorded one of the highest percentages of No votes at 48.5%. In the capital, Dublin Bay South had the highest proportion of Yes votes at 76.5%, with Dún Laoghaire voting Yes by 75.4%.

     The Minister for Justice and Equality, Charlie Flanagan, welcomed the result of the referendum. Minister Flanagan said: "It has always been my view that there is no room for a provision such as this in our Constitution. Ireland is rightly proud of our reputation as a modern, liberal society." He said Friday's vote was another significant step in our Constitutional history.

     In addition to the removal of the word 'blasphemous' from Article 40.6.1 of the Constitution, the Minister will now also move to repeal sections 36 and 37 of the Defamation Act 2009.

     Nobody has been prosecuted for the offence in Ireland since 1855, in connection with an alleged case of Bible-burning. It was included in anti-defamation legislation passed by the government in 2009. Blasphemy was defined as publishing or uttering something "grossly abusive or insulting in relation to matters sacred by any religion, thereby intentionally causing outrage among a substantial number of adherents of that religion". Anybody found guilty could face a €25,000 fine.

     British comedian Stephen Fry almost fell foul of the law in 2015 when he gave a television interview during which he was asked what he would say to God. He told RTÉ: "Bone cancer in children, what's that about?
     "How dare you create a world with such misery ... it's not right, it's utterly, utterly evil.

     "Why should I respect a capricious, mean-minded, stupid God who creates a world so full of injustice and pain?"

     A viewer complained to the gardaí, who launched an investigation, but Mr Fry was never charged with blasphemy.

     Source
  3. “People have to clock in and clock out even when going to the toilet and explain the reason why they were delayed, which is embarrassing and humiliating.”

     DUBLIN — When Valera Zaicev began working in Dublin as one of Facebook’s moderators a couple of years ago, he knew he’d be looking at some of the most graphic and violent content on the internet. What he didn’t know was that Facebook would be counting the seconds of his bathroom breaks.

     “People have to clock in and clock out even when going to the toilet and explain the reason why they were delayed, which is embarrassing and humiliating,” Zaicev told VICE News.

     Facebook, which outsources the majority of its content moderation to over 15,000 third-party contractors, didn’t always keep those employees on such a tight leash. When Zaicev, 33, joined Facebook’s moderation army in July 2016, he found a professional workplace where he felt he received in-depth training and excellent treatment.

     But that all soon changed. As the number of moderators in the Dublin offices exploded — rising from 120 to over 800 in two years — the conditions deteriorated and training for moderators all but evaporated.

     By 2018, the number of content moderators worldwide had grown into the tens of thousands, and Facebook began testing a “time management system” designed to monitor every single minute of their day — including lunch breaks, training, “wellness breaks” for counseling or yoga, and even the time they spent on the toilet, according to Zaicev and one current contracted employee, who did not want to be identified.

     In the past few years, Facebook has tried to fight back against criticism over how it deals with horrific content on the platform. The company has spent at least half a billion dollars hiring human moderators, in addition to the algorithms that already police its pages.
     Because of growing regulatory oversight from governments around the world, the work these moderators do is essential. They’re the company’s first defense against horrific and illegal material, and yet, they say, the company pays them poorly and fails to provide adequate support. The result has been a slew of mental health issues and, last month, the first of many expected lawsuits for failing to provide a safe workplace.

     Increasingly, Facebook moderators say their every move is monitored. When making decisions about content, for example, moderators have to follow an ever-changing document they call “the bible.” And each moderator is given a “quality score.”

     “You're allowed four or five mistakes a month — a 2% failure rate, 98% quality score,” said Chris Gray, a former Facebook moderator who worked at the company for 11 months. “So if you come in, and it's Tuesday or Wednesday, and you've got five mistakes, you are fucked for the month, and all you can think about is how to get the point back.”

     Gray is suffering from PTSD as a result of his time working as a Facebook moderator, and last month he filed a lawsuit in the Irish courts against the social network and CPL, one of a number of companies Facebook works with that employ thousands of human content reviewers in the U.S., Europe, and Asia. The lawsuit accuses the company of causing “psychological trauma” due to poor working conditions and having to view graphic material.

     Dozens if not hundreds of moderators are expected to file similar lawsuits. A source at Gray’s law firm, Coleman & Partners, told VICE News that new documents would be submitted to the High Court in Dublin this month.

     CPL did not respond to VICE News’ questions about the claims made by current or former moderators, and Facebook declined to comment on making its moderators log every minute of their day.
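Gray's figures imply a simple piece of arithmetic: a 2% failure rate that corresponds to four or five mistakes suggests roughly 200-250 audited decisions a month. A minimal sketch of that calculation follows; the formula is inferred from the numbers quoted in the article and is an assumption, not Facebook's or CPL's actual scoring system.

```python
# Quality score as the fraction of audited decisions judged correct,
# reconstructed from the figures Gray quotes (illustrative assumption).
def quality_score(audited: int, mistakes: int) -> float:
    """Return the share of audited decisions that were not marked wrong."""
    return (audited - mistakes) / audited

# 5 mistakes out of 250 audited decisions sits exactly at the 98% bar;
# a single additional mistake drops the moderator below it for the month.
assert quality_score(250, 5) == 0.98
assert quality_score(250, 6) < 0.98
```

This makes the complaint concrete: with a monthly audit sample of a couple of hundred items, the margin for error is only a handful of calls.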
     “The well-being of those that review content for Facebook is and will remain our top priority,” Drew Pusateri, a Facebook spokesman, told VICE News.

     But moderators said that’s simply not the case. While Facebook has made counseling sessions available, many moderators feel they simply can’t take advantage of them, because of the constant monitoring, ever-changing policies, and unrelenting pressure to meet quality standards.

     “There are thousands of moderators right across the EU, and all of them are working in conditions that are unsafe for their mental health and, in our view, unlawful,” said Cori Crider, the director of Foxglove, a U.K.-based advocacy group assisting in the lawsuits. “European laws protecting workers are strong, and we believe Facebook and the rest of the social media firms need to do far more to create a safe workplace for moderators.”

     Following “the bible”

     Facebook moderators review and remove content flagged by billions of Facebook users around the globe as well as by the company’s artificial intelligence algorithms. The vast majority of the content is relatively mundane, but some is deeply disturbing, like sex abuse, bestiality, and brutal violence.

     Every day when moderators clock in, they’re given what’s called a game plan, a schedule that lays out how many pieces of content moderators have to address, and from which queues, including hate speech, harassment, and even threats of self-harm.

     Gray said CPL also gives moderators an average handling time, or the amount of time they’re allowed to spend on each piece of content. The time differs depending on the type of content being reviewed, but Gray said it was typically under one minute.
     While some pieces of content are obvious violations and demand little mental energy, others aren’t so clear-cut and require consulting Facebook’s Implementation Standards, which some moderators refer to as “the bible.” The 10,000-word document has 24 different categories, broken down into three groups: harmful behavior, sensitive content, and legal violations, according to a copy published in March 2018 and reviewed by VICE News.

     According to the rules, moderators can choose to either ignore the content or delete it. If they choose to delete it, they have to describe the content in granular detail and tag it in multiple categories and sub-categories. The information is then fed back into Facebook’s algorithms. That means the moderators are, in effect, training a piece of software that could someday replace them.

     While “the bible” is a helpful guide, it’s updated every two weeks in vague generalizations to cover the wide gamut of content posted on Facebook. “You've got to have generic rules that can be applied easily by everybody in all kinds of situations,” Gray said.

     In a lot of situations, moderators simply don’t know what action they should take. "I've had days when there are 10 of us standing around looking at a screen, and someone has got the policy documents open on another screen, and we're looking and we're arguing about how to apply the policy to this video,” Gray said.

     And in most cases, moderators don’t have the option to escalate the problem to a more senior Facebook employee. No matter what, they have to make a decision, and if it’s wrong, that impacts their overall quality score.

     Unrealistic quality standards

     A team of auditors, who review a select sample of moderators’ decisions every month, determines whether or not moderators got a call wrong. But the auditors are just other moderators who happen to have above-average quality scores.
     Moderators are given one chance to appeal auditors’ decisions to the auditors themselves, and it has to be done within 48 hours — and all the moderators VICE News spoke to said they have to appeal decisions to maintain their quality score of 98%.

     Auditors mark moderators down for making the wrong decisions and leaving content on Facebook that should have been deleted, or vice versa. The process also allows auditors to penalize moderators if they’ve taken the right action for the wrong reasons.

     But auditors are far from infallible and are often unable to articulate why they chose the outcome they did. When determining whether a post that advised a user to “probably kill yourself” should remain online or not, one auditor couldn’t give a definitive answer, according to screenshots of a discussion with a moderator seen by VICE News.

     “[Auditors] know nothing about your market since they are from different countries and speak different languages,” Zaicev said.

     But Facebook isn’t just failing to employ adequately skilled staff for the moderation process. The company has also allowed moderators’ personal details to get into the wrong hands. Zaicev was among more than 1,000 Facebook moderators whose identities were accidentally revealed to the people whose accounts they were blocking. In Zaicev’s case, his information was revealed to members of the Donetsk People's Republic (DPR) and Luhansk People's Republic (LPR), two pro-Russian separatist groups operating in eastern Ukraine.

     Facebook apologized to Zaicev and promised to better protect its employees — just the latest of many promises the social network has made in the wake of multiple scandals.

     Facebook policies

     Despite Facebook’s insistence that it has improved conditions for workers, the new time management policy and the ongoing demands of meeting the quality score have further eroded the time employees can use to de-stress after interacting with traumatizing content.
     Facebook’s new time management tool, which forces moderators to log every minute of their shifts, has only added to the already stressful environment. The tool, which was rolled out to all contractors this year, is so strict that if a moderator is away from their workstation, it logs the person out, and they then have to explain that gap in production to their managers.

     The time management system also makes it hard for moderators to actually use the wellness programs that might offset the trauma they have to see on a daily basis. In many cases, instead of going to counseling or yoga, moderators end up spending their time arguing with auditors about overturning a decision or reviewing the latest version of “the bible.”

     One current Facebook moderator said those who cover a busy market, such as the English-language region, are “not exactly encouraged” to take advantage of the wellness options, so it can be tough to get through the day without feeling mentally exhausted. And the workload varies widely by market. Moderators in countries like the Philippines and Thailand, for example, have said they review as many as 1,000 pieces of content a day, while a moderator in Europe might see fewer than 150 in more difficult queues, such as child exploitation.

     Managers aren’t always able to spot when moderators are having problems, whether because of the content they’re seeing, the stress they’re under, or a combination of both. One moderator who worked at CPL for 14 months in 2017 and 2018 told VICE News that he decided to leave the company when a manager sanctioned him while he was having a panic attack at his computer. He’d just found out that his elderly mother, who lived in a different country, had had a stroke and gone missing.
     “On the day I had the most stress in the world, when I think I might lose my mother, my team leader, a 23-year-old without any previous experience, decided to put more pressure on me by saying that I might lose my job,” said the moderator, who did not want to be identified.

     Treated “like a criminal”

     When the stress finally forces moderators to leave the company, many are afraid to speak out against Facebook or CPL because of the nondisclosure agreements (NDAs) they signed when they started working there. Zaicev said that when he left, CPL pressed him to sign a second NDA. “I refused to sign this and was pressured into doing so,” he said. “After many refusals, I was escorted out of the building like a criminal before my last shift and before saying goodbye to my colleagues.”

     The company also warns employees not to speak to the press. VICE News saw a copy of an email sent to CPL employees in Dublin in 2018 alerting them to an upcoming undercover report about the company from Channel 4. The note suggested talking points for employees to address any questions they received. “You might like to respond with something like: ‘Safety of the people who use Facebook is one of our top priorities,’” one part of the document reads.

     Facebook also urged moderators to remove any references to Facebook from their LinkedIn profiles in case journalists contacted them about controversies, according to Zaicev. Facebook told VICE News it advises against referencing the company online “for reasons related to safety and security,” citing the shooting at YouTube’s headquarters in San Bruno, California, in April 2018.

     Despite Facebook’s push for secrecy, Zaicev is now among dozens of current and former moderators who have contacted Coleman Legal Partners in Dublin about bringing legal action against the company for failing to provide a safe work environment. Unlike in a recent class action case in the U.S., each moderator has to file a separate case in Ireland.
     “The happiest people are the people who are away from Facebook. The more unhappy you are in life, the more you are going to spend on Facebook,” one former moderator who is preparing a legal action against Facebook told VICE News. “And we spent the whole fricking day on Facebook. We can probably guess that it is not good for you.”

     Source
  4. DUBLIN (Reuters) - Microsoft’s LinkedIn, a social network for professionals, said on Thursday it would add 800 new jobs at its European headquarters in Dublin, the latest technology company to boost its presence in Ireland.

     The move underscores signs that hiring in Ireland remains robust despite neighboring Britain’s planned departure from the European Union and a slowdown in global economic growth. Foreign companies account for around one in 10 jobs among Ireland’s more than two million workers, benefiting from a corporate tax rate of just 12.5%.

     The European Commission in August 2016 ordered Ireland to recover 13 billion euros from Apple because of an illegal tax deal which gave the company an unfair advantage in breach of the bloc’s state aid rules. But the ruling has done little to slow the flow of multinational jobs into Ireland, with the number of new roles growing at a record pace last year.

     Facebook said in January that it would hire 1,000 more people in Dublin this year. U.S. cloud software maker Salesforce has also said it plans to add 1,500 jobs over the next five years, one of the largest job commitments in the 70-year history of the state’s foreign investment agency.

     Sunnyvale, Calif.-based LinkedIn, which has more than 630 million members around the world, said the new employees will be based at a new 150,000 square foot development in central Dublin due for completion toward the end of next year.

     Source
  5. The Irish government unveiled its ambitious Climate Action Plan on Monday

     Ireland is set to ban the sale of new petrol and diesel cars by 2030 as part of its ambitious new Climate Action Plan, which was published on Monday. The 150-page plan was unveiled by TD Richard Bruton along with colleagues and Taoiseach [Prime Minister] Leo Varadkar on Monday in Dublin.

     Among some 180 actions to help Ireland reach its goal of net zero carbon emissions by 2050, the Climate Action Plan seeks to ban the sale of gas-fueled vehicles in favor of electric vehicles. To support that action, the Irish government is promising to invest in a nationwide charging network to power the new electric vehicles. Additionally, the Irish government said it will stop issuing National Car Test (NCT) certificates to gas-powered vehicles by 2045.

     TD Bruton said on Twitter: “This is a life changing journey and it is a rapid, transformative adjustment that is required. Nothing less will do. We must all now take up the challenge.”

     A press release highlighted the main takeaways of the new Climate Action Plan:

     • Eliminate non-recyclable plastic and impose higher fees on the production of materials which are difficult to recycle; implement measures to ban single-use plastic plates, cutlery, straws, balloon sticks, and cotton buds;
     • Establish a new Microgeneration Scheme, allowing homeowners to generate their own electricity and sell what they don’t use back to the national grid;
     • Move to 70% renewable electricity by 2030. Currently, only 30% of our electricity comes from renewable sources;
     • Bring 950,000 electric vehicles onto our roads, deliver a nationwide charging network, an electric vehicle scrappage scheme and legislation to ban the sale of petrol / diesel cars from 2030;
     • Expand our network of cycling paths and “Park and Ride” facilities, helping ease congestion;
     • Deliver an intensive program of retrofitting to install 400,000 heat pumps in homes and businesses, replacing the existing carbon-intensive heating systems;
     • Establish a system of 5-year carbon budgets and sector targets, with the relevant Minister responsible for delivering on the target, with penalties if they are not met. These targets will be underpinned by a new Climate Action Act. All major government investments and decisions will be carbon-proofed;
     • Deliver reductions in greenhouse gas emissions in agriculture by creating new, sustainable opportunities for family farms;
     • Deliver a new Retrofit Plan to retrofit 500,000 homes, with large groups of houses being retrofitted by the same contractor to reduce costs, smart finance, and easy payback methods;
     • Every public body will be given a climate action mandate by their line Minister to prioritize climate action, and new letters of expectation will issue to semi-state bodies on Climate Action.

     Source
  6. Irish regulator opens third privacy probe into Apple

     FILE PHOTO: An Apple logo hangs above the entrance to the Apple store on 5th Avenue in the Manhattan borough of New York City, July 21, 2015. REUTERS/Mike Segar/File Photo

     DUBLIN (Reuters) - Apple’s (AAPL.O) main regulator in the European Union, Ireland’s Data Protection Commissioner (DPC), has opened a third privacy investigation into the iPhone maker in the last few weeks, a spokesman for the DPC said on Tuesday.

     The probe is examining whether Apple has complied with the relevant provisions of the EU’s new General Data Protection Regulation (GDPR) privacy law in relation to an access request from a customer. It follows investigations opened last year into how Apple processes personal data for targeted advertising on its platform and whether its privacy policy on the processing of that data is sufficiently transparent.

     The Irish DPC has 20 investigations open into multinational technology companies whose European headquarters in Ireland put them under its watch. Facebook (FB.O) is under the most scrutiny, with eight individual probes, plus two into its WhatsApp subsidiary and one into Facebook-owned Instagram. Like Apple, Twitter (TWTR.N) is under three different investigations, with one each for Google (GOOGL.O), Microsoft-owned (MSFT.O) LinkedIn, and U.S. digital advertising company Quantcast.

     Under the GDPR, regulators have the power to impose fines for violations of up to 4% of a company’s global revenue or 20 million euros ($22 million), whichever is higher.

     Source
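The GDPR fine ceiling quoted in the article is simply the greater of two quantities: 4% of global revenue or 20 million euros. A minimal Python sketch of that rule; the function name and sample revenue figures are illustrative, not drawn from any actual enforcement decision.

```python
# Maximum GDPR administrative fine as described in the article:
# the higher of 4% of global annual revenue or 20 million euros.
def max_gdpr_fine(global_revenue_eur: float) -> float:
    """Return the fine ceiling in euros for a given global revenue."""
    return max(0.04 * global_revenue_eur, 20_000_000.0)

# For a smaller company the 20 million euro floor dominates;
# for a large one, the 4%-of-revenue term takes over.
assert max_gdpr_fine(100_000_000) == 20_000_000.0       # 4% = 4M < 20M
assert max_gdpr_fine(1_000_000_000) == 40_000_000.0     # 4% = 40M > 20M
```

The "whichever is higher" wording matters: for the multinationals under the DPC's watch, the revenue-based term is what makes the ceiling substantial.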