Search the Community

Showing results for tags 'cloud'.

Found 24 results

  1. The AWS Cloud Map documentation says it will work with S3 buckets. What I'm struggling to understand is how I can set up the AWS S3 SDK to use Cloud Map namespaces and services when making requests to an S3 bucket. Is there a complete example somewhere that covers setting up an S3 bucket, adding it to Cloud Map, and using the AWS S3 SDK to access the S3 bucket through Cloud Map? Why I need this: I'm currently trying to set up dev, stage and prod environments across multiple AWS accounts and would like the application code to remain the same, using Cloud Map to map the resources. I understand how I can make this work with databases and other URLs. Where I'm stuck is when my code uses the AWS SDK to access AWS resources. Pursuing AWS certification
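One possible pattern (an assumption on my part, not an official AWS recipe): Cloud Map doesn't plug into the S3 client itself, but you can register each bucket as a Cloud Map service instance with a custom attribute holding the bucket name, then resolve that attribute at startup and hand the result to the ordinary S3 client. A minimal boto3 sketch; the namespace "env.example", service "assets" and attribute "bucket_name" are hypothetical names chosen for illustration:

```python
def bucket_from_discovery(response, attribute="bucket_name"):
    """Pull a bucket name out of a DiscoverInstances-style response."""
    for instance in response.get("Instances", []):
        name = instance.get("Attributes", {}).get(attribute)
        if name:
            return name
    raise LookupError("no instance carries attribute %r" % attribute)

def resolve_and_list(namespace="env.example", service="assets"):
    """Resolve the bucket via Cloud Map, then use the plain S3 client.
    (Needs AWS credentials for whichever account you're running in.)"""
    import boto3
    sd = boto3.client("servicediscovery")
    resp = sd.discover_instances(NamespaceName=namespace,
                                 ServiceName=service)
    s3 = boto3.client("s3")
    return s3.list_objects_v2(Bucket=bucket_from_discovery(resp))
```

Because each account (dev/stage/prod) registers its own bucket name under the same namespace and service, the application code stays identical across environments.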
  2. WebDrive Enterprise 12.22 Build 4228 (x86/x64)

The File Transfer Client You Already Know How to Use. WebDrive makes accessing and editing files on corporate SFTP, WebDAV and SharePoint servers as easy as editing any file on your computer. Easily update files on Amazon S3 or other cloud servers. Open, edit, save. It’s that simple.

Features: WebDrive connects to many types of web servers, as well as servers in the cloud. You can use WebDrive to access your files on all of the following server types and protocols: FTP, FTP over SSL, SFTP, WebDAV, WebDAV over SSL, Amazon S3, Google Drive, DropBox, OneDrive, OneDrive for Business, SharePoint.

Homepage: http://www.southrivertech.com/
Download:
x32: http://southrivertech.com/software/regsoft/webdrive/v12/webdrive32_12_22_4228_en.exe?utm_campaign=Web Drive Free Trial&utm_medium=Windows32bit&utm_source=Website
x64: http://southrivertech.com/software/regsoft/webdrive/v12/webdrive64_12_22_4228_en.exe?utm_campaign=Web Drive Free Trial&utm_medium=Windows64bit&utm_source=website
Medicine: Site: http://pastebin.com Sharecode[?]: /y9qeLTsz
  3. Total Backup Recovery Workstation 9.2 - supports Windows 7/8/8.1 Download link: http://ppt.cc/riHt
  4. Like many or all users nowadays, I have too many passwords to remember, since I don't reuse the same password between services. I was looking for a service to securely store my passwords, and I have read about LastPass on many sites. I'm testing this service right now and it's awesome, really good, but I can't help wondering how and where my passwords are stored, who can access them, and whether the service/servers are secure against attacks. If the service goes offline for maintenance or problems, how can I log in to my accounts? And many other questions... So I'm here to hear a word from you guys: are services/tools like LastPass, KeePass and 1Password worth it?
  5. Adds 100 percent cashback tier if availability dips below 95 percent, removes references to POWER8 servers

IBM has issued a new Cloud Service Description, the formal document that explains what Big Blue considers to be a cloud and how its service will behave. The first item on the list of additions in the email sent to customers is: “a new tier to the Service Level Agreement (SLA) which provides a 100% refund on monthly charges, should a service miss a 95 percent availability target.” 95 percent availability means 36 hours of downtime in a month, which is not what clouds are supposed to do. Could IBM be softening up its users for more incidents like its June 2020 crash that took its entire cloud down for hours, including vital status information pages?

Don’t panic, IBM cloud user: this isn’t Big Blue in any way saying that you can expect one-and-a-half nines reliability in its cloud. Instead it’s IBM catching up to rivals like Azure and AWS, which already have 100 percent refund tiers at 95 percent availability and have had them in place for over a year. All also require users to apply for those refunds, which probably isn’t what you’ll really feel like doing after losing a day-and-a-half of uptime!

IBM’s new legalese also removes mobile apps from Big Blue’s definition of its cloud UI, leaving it with front-end options of “on-line portals, APIs, command line interfaces, or, where available, assisted ordering”. There’s also a new set of rules for notifications of price or SLA changes: in last year’s document IBM offered only a 30-day warning of “any changes to this Service Description.” This year’s model offers the more nuanced promise of “at least 30 days' notice of any price increases or changes to this Service Description, and at least 90 days' notice for SLA changes.”

A clause on Apple software running on iOS has been excised from the document, as have references to POWER8-powered servers. Presumably IBM has upgraded its POWER-powered cloud to newer processors. Source
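For context on the new tier's numbers, here is the arithmetic behind "95 percent availability means 36 hours of downtime in a month" (assuming a 30-day month):

```python
def max_downtime_hours(availability_pct, hours_in_month=30 * 24):
    """Downtime budget implied by an availability target over one month."""
    return hours_in_month * (100 - availability_pct) / 100

# 95% of a 720-hour month leaves a 36-hour downtime budget;
# a conventional "three nines" (99.9%) target leaves under an hour.
```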
  6. JERUSALEM (Reuters) - Intel Corp’s Habana Labs business said on Wednesday it would take time to gain market share from Nvidia in cloud and data centre computing but its deal this week with Amazon Web Services (AWS) was a solid first step. Intel in December bought Israel-based artificial intelligence firm Habana for about $2 billion, seeking to expand its AI portfolio to bolster its data-centre business. Habana’s Gaudi AI training processor, launched in June 2019, has faster processing speeds to compete with similar products from Intel rival Nvidia. “We have to realise that we’re starting from zero and Nvidia is 100%,” said Eitan Medina, Habana’s chief business officer, who said that having AWS as its first customer was very important. “The uphill battle or the process of taking market share has to go through convincing end developers to try it out,” he told reporters. “We are making the dent at the most important place. We’re starting with a very big guy that has the longest experience ... It will take time but I believe we’re on the right path.” Medina declined to comment on whether Habana was negotiating other deals. Habana on Tuesday said its Gaudi processors will power AWS’s Amazon Elastic Compute Cloud “instances” for machine learning workloads, in Habana’s first deal for its Gaudi chips. Amazon is seeing growing demand for its cloud tools during the coronavirus pandemic. These chips, Intel said, would give 40% better price performance than current graphics processing. Medina said that the advantages of Gaudi AI chips were efficiency and allowing for lower capital and operating expenses that in turn could give AWS and others the ability to lower prices for customers for server time. “We are now starting so it will depend on the combination of how we will execute and how important is it for users to lower their cost and to have alternatives to GPUs (graphics processing units),” Medina said. “Our total available market is 100% of AI.” Source
  7. (Reuters) - The COVID-19 pandemic has accelerated cloud adoption by companies by several years, Amazon.com Inc’s cloud head Andy Jassy said on Tuesday. Jassy, who leads Amazon Web Services (AWS), made the remarks at the company’s re:Invent conference, a marketing event for its cloud services. Among several announcements, AWS launched a new chip called Trainium at the event, taking aim at Nvidia Corp’s core business of offering powerful chips to train machine learning algorithms. The chip will complement Amazon’s Inferentia computing chip, which analyses incoming data from platforms like Amazon’s Alexa voice assistant. Source
  8. First solution: run on bigger servers to reduce chatter in the Kinesis fleet

Amazon Web Services has revealed that adding capacity to an already complex system was the reason its US-EAST-1 region took an unplanned and rather inconvenient break last week. The short version of the story is that the company’s Kinesis service, which is used directly by customers and underpins other parts of AWS’ own operations, added more capacity. Servers in the Kinesis fleet need to communicate with each other, and to do so create new threads for each of the other servers in the front-end fleet. AWS says there are “many thousands of servers” involved and that when new servers are added it can take up to an hour for news of additions to reach the entire fleet. Adding capacity therefore “caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration.”

AWS figured that out, but also learned that fixing the problem meant rebooting all of Kinesis. But it was only possible to bring “a few hundred” servers back at a time, and as we’ve seen above Kinesis uses “many thousands of servers”. Which explains why recovery from the outage was slow.

The whole sad story is explained in much greater detail in this AWS post, which also explains how it plans to avoid such incidents in future. Plan one: use bigger servers. “In the very short term, we will be moving to larger CPU and memory servers, reducing the total number of servers and, hence, threads required by each server to communicate across the fleet,” the post says, explaining that doing so “will provide significant headroom in thread count used as the total threads each server must maintain is directly proportional to the number of servers in the fleet.” The company also plans new “fine-grained alarming for thread consumption in the service” and plans “an increase in thread count limits in our operating system configuration, which we believe will give us significantly more threads per server and give us significant additional safety margin there as well.” Also on the agenda: isolating in-demand services like CloudFront to dedicated Kinesis servers.

Dashboard dashed by dependencies

The TIFU!-like post also outlines why Amazon’s dashboards offered only scanty info about the incident – because they, too, depend on a service that depends on Kinesis. AWS has built a dependency-lite way to get info to the Service Health Dashboard it uses as a public status page. The post says it worked as expected, but “we encountered several delays during the earlier part of the event … as it is a more manual and less familiar tool for our support operators.” The cloud therefore used the Personal Health Dashboard, visible to impacted customers only.

The post ends with an apology: “While we are proud of our long track record of availability with Amazon Kinesis, we know how critical this service, and the other AWS services that were impacted, are to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further.” Source
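The failure mode AWS describes (one thread per peer, multiplied across a growing fleet, blowing past an OS limit) can be sketched in a few lines. The numbers in the sketch are made up for illustration, not AWS's actual fleet size or limits:

```python
def threads_needed(fleet_size, per_peer=1, base=0):
    """Each front-end server keeps one thread per *other* server."""
    return base + per_peer * (fleet_size - 1)

def capacity_add_is_safe(fleet_size, added, os_thread_limit):
    """Would growing the fleet keep every server under the thread limit?"""
    return threads_needed(fleet_size + added) <= os_thread_limit
```

This is also why the "bigger servers" fix works: halving the server count (by doubling per-server capacity) roughly halves the threads every remaining server must maintain.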
  9. Why not just use Visual Studio Code, which has 5 times as many extensions and is designed for cloud?

Microsoft is creating a new extensibility model for Visual Studio, its Windows IDE for coding everything from desktop applications to cloud-hosted microservices. Extensions are a critical part of any IDE, both for third-party integrations and for specialist features that would add bloat if built into the core product. There are 3,890 of them listed in the Visual Studio Marketplace, with the most popular (the official GitHub extension) having 1.8 million downloads. Visual Studio extensions cover every aspect of software development and are installed from a Microsoft-hosted repository.

A regular source of amusement in the tech news world is seeing companies that once bragged about the wonders of some product or feature quickly turn against it when they have something newer and shinier to offer. So it is with Visual Studio extensions. Despite their popularity, Program Manager Leslie Richardson wrote yesterday that extensions are "difficult to write" and that "inconsistent APIs, overwhelming architecture, and having to ask your teammates how to implement what should be a basic command are common feedback items from extension writers." Worse still, she added: "Today's in-proc extensions have minimal restrictions over how they can influence the IDE and other extensions. Thus, they are free to corrupt Visual Studio if the extension experiences an error or crash."

Naturally the company now aims to overcome these problems, and is working on a new extension model that will be out of process, meaning better isolated from the code that runs the IDE itself. Despite this change, Richardson said "the new model will preserve the power and UI customizability options" of the old system. Microsoft also promises to make extensions available "both locally and remotely" so that they run correctly in the cloud-hosted GitHub Codespaces environment. Richardson referred in her post to new asynchronous APIs (not supported in the current extensibility model), which will be better suited to remote development environments. The current Visual Studio architecture uses a framework called VS-MEF on both Windows and Mac.

The plans are at an early stage and the company has invited developers to complete a survey about their extension preferences. According to the last Stack Overflow survey on the matter, it is Visual Studio Code (used by 50.7 per cent of developers) that is the most popular IDE, with Visual Studio in second place at 31.5 per cent. One now rather dated survey is not the whole story, but there is no doubting the wide use of Microsoft's overgrown open-source editor. Note that there are currently 21,755 VS Code extensions in the company's Marketplace, more than five times as many as for Visual Studio, and the most popular, for Python, has been downloaded over 26 million times – figures that make the Windows IDE look like small beer in comparison.

For Windows-specific development, though, Visual Studio remains near-essential; Microsoft still needs both, but it may make sense for the company to invest more in the cross-platform product. There is also another one, Visual Studio for the Mac, born of an entirely different codebase, but with which Microsoft is now trying to share code in order to minimise duplication of effort. Both the Windows and Mac versions of Visual Studio currently use a framework called the Managed Extensibility Framework (VS-MEF) as well as a shared VS Editor API. Will this be swept away by the new model, or will the new thing build on the old thing under the covers? One of the issues with Visual Studio is there is a ton of legacy to support, and it is likely to be a long time before Microsoft can afford to drop support for the existing extension model, crash-prone or not. Source
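A language-agnostic sketch (nothing here is the actual Visual Studio extension API) of why an out-of-process model stops a crashing extension from corrupting the host: the host launches the extension in a child process and merely observes its exit status, so a crash kills only the child.

```python
import subprocess
import sys

def run_extension(code):
    """Run extension code in a child process; the host survives any crash."""
    proc = subprocess.run([sys.executable, "-c", code])
    return proc.returncode == 0

# A well-behaved extension returns True; a crashing one returns False --
# and in both cases the host process is still alive to report the result.
```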
  10. Slump in corporate PC demand takes a toll, though cloud business continues to shine Microsoft has come a very long way from depending on Windows. But the ubiquitous operating system can still brighten—or darken—the company’s results. Both effects were actually apparent in the company’s fiscal first-quarter results posted late Tuesday. The cloud business remains the star, and performance there was strong with commercial cloud revenue growing 31% year over year to $15.2 billion, even showing a bit of acceleration from the prior quarter that ended in June. The Windows side of the business was more complicated. Revenue driven by consumer PC purchases saw sharp growth, as did the portion driven by volume licensing of the software. But slumping corporate demand for PCs took a toll. Windows OEM Pro revenue slid 22% year over year—the worst drop since Microsoft began reporting under its new segments in 2015. Microsoft doesn’t break out specifics, but analysts estimate that Windows OEM revenue—reflecting the sale of both consumer and corporate PCs bundled with the operating system—totaled $13.8 billion in Microsoft’s fiscal year that ended in June, according to Visible Alpha. That’s about 10% of total revenue, though Mark Moerdler of Bernstein estimates that the profit contribution is higher given the high-margin nature of that business. Windows also played a big role in Microsoft’s disappointing projection for the December quarter, which helped send the stock down more than 3% on Wednesday. The company said it expects 2% on-year growth for its More Personal Computing segment that includes the Windows OEM business, below the 6% growth Wall Street was targeting. Chief Financial Officer Amy Hood said on the company’s conference call that lower corporate PC demand would continue to be a factor, along with difficult comparisons to the prior year, when stragglers gave sales a boost by finally updating from Windows 7, which had reached Microsoft’s end-of-life stage. 
As Microsoft continues to shift into a cloud-computing giant, the impact of Windows will continue to lessen. But the operating system still powers more than three-quarters of the world’s computers, according to Statcounter. That makes it still a lucrative business for Microsoft—and at times a high-quality problem. Source
  11. Autodesk published its Q3 results on Tuesday, coming out ahead of market expectations. The company's non-GAAP diluted EPS was $1.04 on revenue of $952 million, up 13% year-over-year. Wall Street was looking for earnings of 96 cents per share on revenue of $942.2 million. Autodesk CEO Andrew Anagnost said in prepared remarks that the company is seeing momentum among enterprise customers, noting that Autodesk signed a nine-digit deal in the quarter. "Our strong third quarter results reflect the growing customer value of our cloud-based platform and the resilience of our subscription business model," said Anagnost. "Our enterprise customers are undertaking their own digital transformation and, by enabling that transformation, we are becoming strategic partners. We are confident in our fiscal 2023 targets and expect to see continued double-digit growth thereafter." Recurring revenue represented 97 percent of Autodesk's total Q3 revenue. Meanwhile, total billings decreased 1% to $1.01 billion. For Q4, Autodesk is expecting diluted EPS to range from $1.04 to $1.10, with revenue in the range of $999 million to $1.014 billion. Wall Street is looking for Q4 earnings of $1.06 per share on revenue of $1 billion. Shares of Autodesk were down slightly after hours. Source
  12. Urgent, priority patients can be tested and be told their results within three hours. Waiting 72 hours to receive COVID-19 test results can feel like forever. To speed up that process for urgent, priority patients, eHealth NSW has partnered with Microsoft to use cloud technology and artificial intelligence to reduce the waiting period to three hours. NSW Health Pathology, which has been charged with overseeing COVID-19 testing clinics, has deployed rapid testing for COVID-19 to 35 portable point-of-care devices, so patients can be tested anywhere before their data is uploaded to Azure cloud via 4G connectivity and is reviewed by a clinician -- no matter their location -- in near real-time. "I think with remote monitoring and remote testing you can start to change service models so you can have self-service in communities. That's how you're improving the quality of care, not just the availability of care," NSW Health Pathology CIO James Patterson said. The point-of-care devices have also been used in specific COVID-19 testing situations, such as allowing test data to be collected from crew and passengers on a ship, so that results are instantly uploaded and analysed. At the same time, Patterson revealed the same technology was used to underpin the service that has enabled NSW Health Pathology to send NSW residents an SMS if their COVID-19 test results came back negative. Patterson added that by being able to ingest data from different sources -- the point-of-care devices, electronic medical records, such as My Health Records, and data stores in NSW Health's systems -- analysis and reporting can be completed on a single platform. "The whole volume of our COVID-19 tests is now visible through that secure Power BI platform, covering around 1.7 million patients as at October 2020," he said. Aside from reducing wait times for patients, Patterson said the rapid testing systems have helped healthcare staff avoid over 300,000 hours of manual calls to patients.
Off the back of this deployment, NSW Health Pathology has been testing how the point-of-care systems can deliver other medical results faster to patients through trials at Blacktown and Westmead Hospitals. The trial has involved combining point-of-care pathology results with algorithms to help clinicians identify when patients are at risk of a life-threatening condition known as sepsis in the emergency department. "The power of this is really around doing the test and wrapping a service around that," Patterson said. He added there is the potential to speed up diagnosis and treatment for stroke patients, particularly in aged care homes, for instance. Source
  13. (Reuters) - Amazon.com Inc’s widely used cloud service, Amazon Web Services (AWS), was back up on Thursday following an outage that affected several users ranging from websites to software providers. “We have restored all traffic to Kinesis Data Streams via all endpoints and it is now operating normally,” the company said in a status update. Amazon Kinesis, a part of AWS’ cloud offerings, collects, processes and analyzes real-time data and offers insights. Video-streaming device maker Roku Inc, Adobe’s Spark platform, video-hosting website Flickr and the Baltimore Sun newspaper were among those hit by the outage, according to their posts on Twitter. AWS said it had identified the cause of the outage and taken action to prevent a recurrence, according to the status update. The company did not immediately respond to a Reuters request for comment on the cause of the outage. Source
  14. Nvidia unwraps RTX and T4-based hardware and cloud instances

GPU giant unveils new CUDA-X label for its software stack.

Nvidia has taken the wraps off its next iteration of workstations for data scientists and users interested in machine learning, with a reference design featuring a pair of Quadro RTX GPUs. Announced at Nvidia GTC on Monday, the dual Quadro RTX 8000 or 6000 GPU design is slated to provide 260 teraflops, and have 96GB of memory available thanks to the use of NVLink. Signed up to provide the new, beefier workstations are Dell, HP, and Lenovo. On the server side, the company unveiled its RTX blade server, which can pack 40 GPUs into an 8U space, and is labelled as an RTX Server Pod when combined with 31 other RTX blade servers. All up, the RTX Server Pod has 1,280 GPUs. The storage and networking backbone of the blade servers is provided by Mellanox -- which Nvidia purchased for just shy of $7 billion last week. Speaking during his keynote, CEO Jensen Huang said Pods would be used to support the company's GeForce Now service, to which SoftBank and LG Uplus were announced as members of the GeForce Now Alliance, and its upcoming Omniverse collaboration product that Huang described as Google Docs for movie studios. For Tesla GPUs, T4 GPUs are being offered by Cisco, Dell EMC, Fujitsu, HPE, and Lenovo in machines that have been certified as Nvidia GPU Cloud-ready -- an award Nvidia launched in November that shows "demonstrated ability to excel in a full range of accelerated workloads", and are able to run containers put together by Nvidia for certain workloads. "The rapid adoption of T4 on the world's most popular business servers signals the start of a new modern era in enterprise computing -- one in which GPU acceleration has become standard," Nvidia vice president and general manager of Accelerated Computing Ian Buck said.

In the cloud, users of Amazon Web Services (AWS) will soon be able to make use of Nvidia Tesla T4 GPUs with EC2 G4 instances, with general availability slated for the coming weeks, and a preview now open. AWS users will also be able to make use of T4s with Amazon Elastic Container Service for Kubernetes. The cloud giant already has support for Nvidia Tesla V100 on its P3 instances that can support up to 8 GPUs and 64 Intel Xeon CPUs. At the same time, Nvidia is repackaging its software stack and libraries to fall under the CUDA-X moniker, including RAPIDS, cuDNN, cuML, and TensorRT. Finally, Google Cloud ML and Microsoft Azure Machine Learning have integrated RAPIDS, which has been touted as being able to reduce network training times by a factor of 20. Source
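The Pod figure checks out arithmetically: one RTX blade server combined with 31 others, at 40 GPUs apiece:

```python
servers_per_pod = 1 + 31   # one blade server plus 31 more
gpus_per_server = 40       # each 8U blade server packs 40 GPUs
pod_gpus = servers_per_pod * gpus_per_server
# matches the 1,280-GPU figure quoted in the article
```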
  15. With cloud companies open-sourcing their innovations, and enterprises increasing participation, open source sustainability is at an all-time high

There has perhaps never been so much angst over whether open source software development is sustainable, and yet there has never been clearer evidence that we’re in the golden age of open source. Or on the cusp. Here and there an open source company might struggle to make a buck, but as a community of communities, open source has never been healthier. There are a few good indicators for this.

The clouds have parted

The first is that the clouds—yes, all of them—are open sourcing essential building blocks that expose their operations. Google rightly gets credit for moving first on this with projects like Kubernetes and TensorFlow, but the others have followed suit. For example, Microsoft Azure released Azure Functions, which “extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or third-party service as well as on-premises systems.” Azure Functions is a significant open source release, so much so that CNCF executive director Dan Kohn initially assumed that the Azure Functions “SDK is open source, but I don’t think the underlying functions are.” In other words, Kohn assumed the on-ramp to Azure was open source, but not the code that could enable a developer to run a serverless setup on bare metal. That assumption, however, was wrong, and Kohn corrected himself: “This is open source and can be run on any environment (including bare metal).” Boom. More recently, AWS released Firecracker, a lightweight, open source virtualization technology for running multi-tenant container workloads that emerged from AWS’ serverless products (Lambda and Fargate).
In a textbook example of how open source is supposed to work, Firecracker was derived from the Google-spawned crosvm but then spawned its own upgrade in the form of Weave Ignite, which made Firecracker much easier to manage. These are just a few examples of the interesting open source projects emerging from the public clouds. (Across the ocean, Alibaba has been open sourcing its chip architecture, among other things.) More remains to be done, but these offer hope that the public clouds come not to bury open source, but rather to raise it.

Enterprises are making waves

Perhaps even more tellingly, mainstream enterprises are also getting religion on open source. Over a decade ago, Red Hat CEO Jim Whitehurst declared an open source emergency of sorts: The vast majority of software written today is written in enterprise and not for resale. And the vast majority of that is never actually used. The waste in IT software development is extraordinary.... Ultimately, for open source to provide value to all of our customers worldwide, we need to get our customers not only as users of open source products but truly engaged in open source and taking part in the development community.

Since that declaration, things have gotten better. While it remains true that most enterprises aren’t deeply engaged in the open source development community, that’s changing. In 2017, just 32.7% of developers responding to Stack Overflow’s developer survey said they contribute to open source projects. By 2019, that number had jumped to 65%. The data is somewhat problematic, as the questions asked in the two years were different; in 2017 they didn’t ask how often developers contribute, as Lawrence Hecht has highlighted. Most developers who contribute to open source do so episodically, and less than once per month.
Even so, it’s not hard to believe that the more companies get serious about becoming software companies, the more they’re going to encourage their developers to get involved in the open source communities upon which they depend. At the corporate level, such involvement might seem easier for new-school enterprises like Lyft, which are roiling old industries by open sourcing code and data to help foster their disruption. “But of course the new kids are doing that,” you say. Well, it’s not just the upstarts. Old-school enterprises like Home Depot host code on GitHub, while financial services companies like Capital One go even further, sponsoring open source events to help foster community around their proliferating projects. Or for an even more dramatic example of old-school embracing new lessons, consider that the Los Angeles Department of Transportation spawned the Open Mobility Foundation, with open source software designed to help manage the scooters, bikes, drones, rideshare, and autonomous vehicles zipping around cities. So, again, not everybody is doing it. Not yet. But far more organizations are involved in open source today than were back in 2008, when Whitehurst made his plea for greater enterprise involvement. Such involvement is happening both at the elite level (public clouds) and in more mainstream ways, ushering in a golden era of open source. Source: Matt Asay / InfoWorld
  16. Managed services and software optimized for Red Hat OpenShift and Linux aimed at helping enterprises move to the cloud.

Image: IBM CEO Ginni Rometty with Red Hat CEO Jim Whitehurst

It's only been three weeks since IBM closed its $34 billion takeover of Red Hat, and that was as long as the company was willing to wait until it announced its first joint products with the new subsidiary. According to IBM, it has already "transformed its software portfolio to be cloud-native and optimized it to run on Red Hat OpenShift." The new Cloud Paks are containerized software, specialized by workload and optimized to run on Red Hat's implementation of the open source container application platform OpenShift. They are meant to help enterprises move to the cloud. IBM also announced Red Hat OpenShift on IBM Cloud as a fully managed service and Red Hat OpenShift on IBM Z and LinuxONE for its mainframe customers. In addition, it's offering consulting and technology services for Red Hat, utilizing what it says is "one of the world's largest teams of Red Hat-certified consultants and more than 80,000 cloud application services practitioners" to help its customers move to cloud environments and maintain their cloud infrastructures once the move is made. "Red Hat is unlocking innovation with Linux-based technologies, including containers and Kubernetes, which have become the fundamental building blocks of hybrid cloud environments," Red Hat CEO Jim Whitehurst said in a statement. "This open hybrid cloud foundation is what enables the vision of any app, anywhere, anytime. Combined with IBM's strong industry expertise and supported by a vast ecosystem of passionate developers and partners, customers can create modern apps with the technologies of their choice and the flexibility to deploy in the best environment for the app -- whether that is on-premises or across multiple public clouds."
The first five Cloud Paks out of the gate are:
• Cloud Pak for Data, which the company says will simplify and automate deriving insights from data while providing an open and extensible architecture to virtualize data for AI faster.
• Cloud Pak for Applications, to help businesses modernize, build, deploy, and run applications.
• Cloud Pak for Integration, for integrating apps, data, cloud services, and APIs.
• Cloud Pak for Automation, to help transform business processes, decisions, and content.
• Cloud Pak for Multicloud Management, to provide multicloud visibility, governance, and automation.
According to IBM, the Cloud Paks provide a common operating model and common set of services with a unified and intuitive dashboard. "IBM is unleashing its software from the data center to fuel the enterprise workload race to the cloud," Arvind Krishna, IBM's senior VP of cloud and cognitive software, said in a statement. "This will further position IBM as the industry leader in the more than one-trillion-dollar hybrid cloud opportunity. We are providing the essential tools enterprises need to make their multi-year journey to cloud on common, open standards that can reach across clouds, across applications and across vendors with Red Hat." All in all, the company says the new software and services draw on more than 100 products from IBM's software portfolio that are optimized for Red Hat OpenShift and Red Hat Enterprise Linux. Source
  17. Phishers are Angling for Your Cloud Providers Many companies are now outsourcing their marketing efforts to cloud-based Customer Relationship Management (CRM) providers. But when accounts at those CRM providers get hacked or phished, the results can be damaging for both the client’s brand and their customers. Here’s a look at a recent CRM-based phishing campaign that targeted customers of Fortune 500 construction equipment vendor United Rentals. Stamford, Ct.-based United Rentals [NYSE:URI] is the world’s largest equipment rental company, with some 18,000 employees and earnings of approximately $4 billion in 2018. On August 21, multiple United Rentals customers reported receiving invoice emails with booby-trapped links that led to a malware download for anyone who clicked. While phony invoices are a common malware lure, this particular campaign sent users to a page on United Rentals’ own Web site (unitedrentals.com). A screenshot of the malicious email that spoofed United Rentals. In a notice to customers, the company said the unauthorized messages were not sent by United Rentals. One source who had at least two employees fall for the scheme forwarded KrebsOnSecurity a response from UR’s privacy division, which blamed the incident on a third-party advertising partner. “Based on current knowledge, we believe that an unauthorized party gained access to a vendor platform United Rentals uses in connection with designing and executing email campaigns,” the response read. “The unauthorized party was able to send a phishing email that appears to be from United Rentals through this platform,” the reply continued. “The phishing email contained links to a purported invoice that, if clicked on, could deliver malware to the recipient’s system. 
While our investigation is continuing, we currently have no reason to believe that there was unauthorized access to the United Rentals systems used by customers, or to any internal United Rentals systems.” United Rentals told KrebsOnSecurity that its investigation so far reveals no compromise of its internal systems. “At this point, we believe this to be an email phishing incident in which an unauthorized third party used a third-party system to generate an email campaign to deliver what we believe to be a banking trojan,” said Dan Higgins, UR’s chief information officer. United Rentals would not name the third party marketing firm thought to be involved, but passive DNS lookups on the UR subdomain referenced in the phishing email (used by UR for marketing since 2014 and visible in the screenshot above as “wVw.unitedrentals.com”) point to Pardot, an email marketing division of cloud CRM giant Salesforce. Companies that use cloud-based CRMs sometimes will dedicate a domain or subdomain they own specifically for use by their CRM provider, allowing the CRM to send emails that appear to come directly from the client’s own domains. However, in such setups the content that gets promoted through the client’s domain is actually hosted on the cloud CRM provider’s systems. Salesforce did not respond to multiple requests for comment. But it seems likely that someone at Pardot with access to United Rentals’ account was phished, hacked, or perhaps guilty of password re-use. This attack comes on the heels of another targeted phishing campaign leveraging Pardot that was documented earlier this month by Netskope, a cloud security firm. Netskope’s Ashwin Vamshi said users of cloud CRM platforms have a high level of trust in the software because they view the data and associated links as internal, even though they are hosted in the cloud. 
“A large number of enterprises provide their vendors and partners access to their CRM for uploading documents such as invoices, purchase orders, etc. (and often these happen as automated workflows),” Vamshi wrote. “The enterprise has no control over the vendor or partner device and, more importantly, over the files being uploaded from them. In many cases, vendor- or partner-uploaded files carry with them a high level of implicit trust.” Cybercriminals increasingly are targeting cloud CRM providers because compromised accounts on these systems can be leveraged to conduct extremely targeted and convincing phishing attacks. According to the most recent stats (PDF) from the Anti-Phishing Working Group, software-as-a-service providers (including CRM and Webmail providers) were the most-targeted industry sector in the first quarter of 2019, accounting for 36 percent of all phishing attacks. Image: APWG Source: Phishers are Angling for Your Cloud Providers (KrebsOnSecurity - Brian Krebs)
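The subdomain delegation described above, where a client-owned subdomain is handed to the CRM so that provider-hosted content appears under the client's own domain, is also why these campaigns pass casual inspection: the host really does belong to the client. A minimal sketch (the domains are hypothetical) of the naive check that a hijacked CRM account sails straight through:

```python
from urllib.parse import urlparse

def under_client_domain(url: str, client_domain: str) -> bool:
    """True if the link's hostname sits under the client's own domain."""
    host = urlparse(url).hostname or ""
    return host == client_domain or host.endswith("." + client_domain)

# A CRM campaign link legitimately carries the client's domain even though
# the content it serves lives on the provider's systems -- so this check
# also passes for a phishing email sent through a compromised CRM account.
link = "https://go.example-client.com/l/invoice"  # hypothetical tracking link
print(under_client_domain(link, "example-client.com"))
```

This is exactly the trust gap Vamshi describes: domain reputation and allow-lists vouch for the subdomain itself, not for whoever currently controls the CRM account publishing through it.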
  18. In addition to Knative, which is for deploying serverless workloads, Google evidently plans to keep the Kubernetes service mesh, Istio, in-house. It appears that at least one major Kubernetes-related open source project, Knative, isn't destined to join its mother project at the Cloud Native Computing Foundation. Indeed, it's not destined to join any foundation. According to an email sent last week to Knative developers by Donna Malayeri, a project manager at Google and a member of Knative's steering committee, Google won't be relinquishing control of the project anytime soon, if ever. Knative is a promising but not yet ready for prime time Kubernetes-based platform for building, deploying, and managing modern serverless workloads. "Since the start of the Knative project, there have been questions about whether Knative would be donated to a foundation, such as CNCF," Malayeri wrote, indicating in the signature line that she was writing "on behalf of Google." "Google leadership has considered this, and has decided not to donate Knative to any foundation for the foreseeable future." This came as a surprise to many Knative developers and supporters, as it's been presumed by many that Google would eventually contribute Knative to Kubernetes' home at the Linux Foundation's CNCF or to the care of another foundation offering open governance, such as the Apache Software Foundation. That's become the expected route for open source projects, as open governance is seen as an important step for building robust developer communities around projects. Many developers (as well as potential users) are suspicious of projects where a handful of vendors control direction. Which seems to be the direction Knative is headed. At least for the time being, community input will be limited. The project is governed by a steering committee of seven members, four of whom are Google employees, with one seat each going to Pivotal (now part of VMware), IBM, and Red Hat. 
Since any action requires a majority vote, this puts Google completely in the driver's seat. Google's lopsided representation on the steering committee might change, however. In the notice, Malayeri said, "Within the next few weeks, we plan to clarify how project members can attain leadership positions in Knative. As always, our goal is to ensure that Knative serves the needs of our users, the community, and everyone who benefits from using Knative." She also noted that the project remains open source, with no indication that that will change. At this stage in the game, this might be much ado about nothing. Knative isn't yet recommended for production or expected to come out of beta until sometime in the spring. Another Google-held Kubernetes-related project, Istio, wasn't mentioned in the email, but The Register reported that Google intends to hold it close to its chest too. One source closely involved in upstream Kubernetes development told Data Center Knowledge that Istio developers were told in a briefing that the project would remain under Google's control and not be contributed to a foundation. In light of the Knative situation, this would make sense. Istio is a service mesh designed primarily to be used with Kubernetes, and it's also a necessary component for running Knative. Unlike Knative, Istio is out of beta and is already seeing production use. Source
  19. I'm using an OPPO F7 running Android 9. Some apps are pre-installed and running in the background, but apparently because they are region-specific they are hidden: their icons don't appear in any launcher, including the default one. When I try to install the app myself, the installation fails with "App Not Installed", since it can't override the existing system app. Is there any way to unhide or make visible this hidden app, called Oppo App Market? Regards
  20. By Larry Dignan for Between the Lines Microsoft delivered stellar fiscal second quarter results and has cloud momentum, but we know about as much about Azure sales as we did yesterday. In other words, we know nothing. Microsoft delivered a blowout fiscal second quarter as its commercial cloud hit a $50 billion annual run rate, but in the end we know just as little about Azure sales as we did before. In other words, we know nothing about Azure sales so let's put aside the breathless BS until Microsoft gives us some real data. Here's what we know following Microsoft's second quarter: Azure sales were up 62% from a year ago. Growth reaccelerated from 58% in the previous quarter. The nagging question: What's the base for that growth? We don't know. But I do know this: If Azure sales were all that and a bag of AWS market share, we'd know about it. Instead, Azure sales are lumped into commercial cloud, which doesn't align with Microsoft's reporting structure, and consists of Azure, Office 365 business services, Dynamics 365 services, Enterprise Mobility + Security and other Microsoft cloud products. Various estimates put Azure revenue at around $4 billion a quarter. At this point, Azure could be a $16 billion a year business or $20 billion. Either way, the bulk of commercial cloud is widely assumed to be Office 365. AWS exited its third quarter of 2019 with an annual run rate of $36 billion by itself with no Office 365, Dynamics or other boosters. At this point, the lack of Azure disclosure merely means that we have an apples-and-oranges cloud comparison between Amazon and Microsoft. We also have a bunch of guesstimates and storylines that Microsoft is gaining in hybrid cloud. Maybe. Maybe not. Either way, the cloud computing buyer deserves more. 
Wedbush analyst Daniel Ives noted: Microsoft beat our Azure cloud estimate by 1,000 bps (not a misprint) coming in at 62% seeing an acceleration from the prior quarter as the cloud share shift from Bezos and AWS to Nadella and Azure is happening at an accelerating pace in our opinion. That storyline from Ives is becoming commonplace even if it's not necessarily validated. A few weeks ago, a Goldman Sachs survey of 100 respondents was held up as validation that Azure is taking share from AWS. Amy Hood, CFO at Microsoft, provided some Azure color. A little bit about the reacceleration in the Azure growth rate, let me divide that into its components. We did have a very good and healthy, broad-based consumption growth, especially in IaaS and PaaS...The SaaS component or the per user component also tends to be where you'll get some variability as well. We did have a good SaaS component quarter in addition to the healthy base, and that does result in some movement in that number from quarter-to-quarter. And, I think Microsoft 365 suite, and the momentum we've got in security and management and mobility is a big contributor to that. Umm, thanks. Now don't get me wrong. There was nothing to nit-pick about with Microsoft's second quarter. It was a complete blowout. And there's no shame in being a cloud juggernaut led by Office 365 and SaaS. But if we're really going to have a bake-off and comparison between Azure and AWS let's get some damn data. Right now, AWS provides the data and Microsoft is ducking with Azure. Source
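The run-rate arithmetic behind these guesstimates is trivial, which underscores the article's complaint: without a disclosed base, the 62% growth figure floats free. A quick sketch using the article's numbers (the quarterly bases below are analyst estimates, not Microsoft disclosures):

```python
def annual_run_rate(quarterly_revenue_b: float) -> float:
    """Annualize a single quarter's revenue, the way run-rate figures are quoted."""
    return quarterly_revenue_b * 4

# The same 62% growth rate describes very different businesses
# depending on the undisclosed base.
for base in (4.0, 5.0):  # estimated Azure quarterly revenue, in $ billions
    rate = annual_run_rate(base)
    print(f"${base:.0f}B/quarter -> ${rate:.0f}B run rate; "
          f"62% growth adds ~${rate * 0.62:.1f}B of annual revenue")
```

By the same convention, AWS's $36 billion run rate corresponds to a single $9 billion quarter, which is why the article calls the Azure comparison apples and oranges.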
  21. Computer-maker's WebStorage software tied to malware attack from the BlackTech Group. Image: Jeremy Brooks / Flickr ASUS' update mechanism has once again been abused to install malware that backdoors PCs, researchers from Eset reported earlier this week. The researchers, who continue to investigate the incident, said they believe the attacks are the result of router-level man-in-the-middle attacks that exploit insecure HTTP connections between end users and ASUS servers, along with incomplete code-signing to validate the authenticity of received files before they're executed. Plead, as the malware is known, is the work of espionage hackers Trend Micro calls the BlackTech Group, which targets government agencies and private organizations in Asia. Last year, the group used legitimate code-signing certificates stolen from router-maker D-Link to cryptographically authenticate itself as trustworthy. Before that, the BlackTech Group used spear-phishing emails and vulnerable routers to serve as command-and-control servers for its malware. Late last month, Eset researchers noticed the BlackTech Group was using a new and unusual method to sneak Plead onto targets’ computers. The backdoor arrived in a file named ASUS Webstorage Upate.exe included in an update from ASUS. An analysis showed infections were being created and executed by AsusWSPanel.exe, which is a legitimate Windows process belonging to, and digitally signed by, ASUS WebStorage. As the name suggests, ASUS WebStorage is a cloud service the computer-maker offers for storing files. Eset published its findings on Tuesday. The abuse of legitimate AsusWSPanel.exe raised the possibility the computer-maker had fallen to yet another supply-chain attack that was hijacking its update process to install backdoors on end-user computers. 
Eventually, Eset researchers discounted that theory for three reasons:
• The same suspected update mechanism was also delivering legitimate ASUS WebStorage binaries.
• There was no evidence ASUS WebStorage servers were being used as control servers or served malicious binaries.
• The attackers used standalone malware files instead of incorporating their malicious wares inside ASUS’s legitimate software.
As the researchers considered alternative scenarios, they noted that ASUS WebStorage software is susceptible to man-in-the-middle attacks, in which hackers controlling a connection tamper with the data passing through it. The researchers made this determination because updates are requested and transferred using unencrypted HTTP connections, rather than HTTPS connections that are immune to such exploits. The researchers further noticed that the ASUS software didn’t validate the update’s authenticity before executing it. That left open the possibility the BlackTech Group was intercepting ASUS’ update process and using it to push Plead instead of the legitimate ASUS file. The researchers also observed that most of the organizations that received the Plead file from ASUS WebStorage were using routers made by the same manufacturer. The routers, which Eset declined to identify while it’s still investigating the case, have administrator panels that are Internet accessible. That left open the possibility a MitM attack was being caused by malicious Domain Name System (DNS) settings being pushed to the routers or something more complex, such as tampering with iptables. Eset’s working theory then shifted from the BlackTech Group breaching ASUS’ network and performing a supply-chain attack to the attackers performing a MitM attack on ASUS’ insecure update mechanism. Indeed, as documented below in a screenshot of a captured communication during a malicious ASUS WebStorage software update, attackers replaced the legitimate ASUS URL with one from a compromised Taiwanese government website. 
A captured communication during a malicious update of the ASUS WebStorage software. In an email, Eset Senior Malware Researcher Anton Cherepanov said the captured communication isn't proof of a MitM. “It’s possible that attackers gained access to ASUS WebStorage servers and pushed XML with a malicious link only to a small number of computers,” he wrote. “That’s why we say it’s still possible. We can’t discount this theory.” But for the reasons listed above, he believes the MitM scenario is more likely. In all, Eset has counted about 20 computers receiving the malicious ASUS update, but that number includes only Eset’s own customers. “The real number is probably higher if we consider targets that are not our users,” Cherepanov told Ars. Once the file is executed, it downloads an image from a different server that contains an encrypted executable file hidden inside. Once decrypted, the malicious executable gets dropped into the Windows Start Menu folder, where it’s loaded each time the user logs in. It’s surprising that even after the serious supply-chain attack estimated to have infected as many as 1 million users, the company was still using unencrypted HTTP connections to deliver updates. Ars sent ASUS media representatives two messages seeking comment for this post. So far they have yet to respond. In a blog post sent over an unencrypted HTTP connection, ASUS reported a "WebStorage security incident" that reads: ASUS Cloud first learned of an incident in late April 2019, when we were contacted by a customer with a security concern. Upon learning of the incident, ASUS Cloud took immediate action to mitigate the attack by shutting down the ASUS WebStorage update server and halting the issuance of all ASUS WebStorage update notifications, thereby effectively stopping the attack. 
In response to this attack, ASUS Cloud has revamped the host architecture of the update server and has implemented security measures aimed at strengthening data protection. This will prevent similar attacks in the future. Nevertheless, ASUS Cloud strongly recommends that users of ASUS WebStorage services immediately run a complete virus scan to ensure the integrity of your personal data. The post doesn't say what those security measures are. It also makes no mention of Eset's finding that the service was abused to install malware. Until independent security experts say the site is safe to use, people would do well to avoid it. Source: Hackers abuse ASUS cloud service to install backdoor on users’ PCs (Ars Technica)
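The two weaknesses Eset flagged, plaintext HTTP transport and no authenticity check before execution, are independent failures; fixing either one would have complicated the attack. A minimal sketch (the payloads are hypothetical, not ASUS's actual update format) of the second fix, verifying a download against a digest pinned out-of-band before executing it:

```python
import hashlib

def is_authentic(payload: bytes, pinned_sha256_hex: str) -> bool:
    """Refuse an update whose digest doesn't match the pinned value, so a
    man-in-the-middle swap fails even if the transport is plaintext HTTP."""
    return hashlib.sha256(payload).hexdigest() == pinned_sha256_hex

# The vendor publishes the digest over a channel the attacker can't tamper
# with; the updater compares it before ever executing the download.
good = b"legitimate update payload"       # hypothetical update bytes
pinned = hashlib.sha256(good).hexdigest()

print(is_authentic(good, pinned))                   # True
print(is_authentic(b"backdoored payload", pinned))  # False
```

Production updaters typically go further and verify a code-signing signature over the file, which also covers the compromised-server scenario that Cherepanov said could not be ruled out.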
  22. Yes, there are security ramifications to serverless computing With three in five enterprises now going or planning to go serverless, the attack surface widens. At least one in five organizations, 21%, have implemented serverless computing as part of their cloud-based infrastructure. That's the finding of a recent survey of 108 IT managers conducted by Datamation. Another 39% are planning or considering serverless resources. Photo: Joe McKendrick The question is, will serverless computing soon gain critical mass, used by a majority of enterprises? Along with this, what are the ramifications for security? Existing on-premises systems and applications -- you can call some of them "legacy" -- still require more traditional care and feeding. Even existing cloud-based applications are still structured around the more serverful mode of development and delivery. That's what many enterprises are dealing with now -- loads of traditional applications to manage even while they begin a transition to serverless mode. Again, even if applications or systems are in the cloud, that still is closer to traditional IT than serverless on the continuum, says Marc Feghali, founder and VP product management for Attivo Networks. "Traditional IT architectures use a server infrastructure, that requires managing the systems and services required for an application to function," he says. It doesn't matter if the servers happen to be on-premises or cloud-based. "The application must always be running, and the organization must spin up other instances of the application to handle more load which tends to be resource-intensive." Serverless architecture goes much deeper than traditional cloud arrangements, which are still modeled on the serverful model. 
Serverless, Feghali says, is more granular, "focusing instead on having the infrastructure provided by a third party, with the organization only providing the code for the applications broken down into functions that are hosted by the third party. This allows the application to scale based on function usage. It's more cost-effective since the third-party charges for how often the application uses the function, instead of having the application running all the time." How should the existing or legacy architecture be phased out? Is it an instant cut over, or should it be a more gradual migration? Feghali urges a gradual migration, paying close attention to security requirements. "There are specific use cases that will still require existing legacy architecture," and serverless computing "is constrained by performance requirements, resource limits, and security concerns," Feghali points out. The advantage serverless offers is that it "excels at reducing costs for compute. That being said, where feasible, one should gradually migrate over to serverless infrastructure to make sure it can handle the application requirements before phasing out the legacy infrastructure." Importantly, a serverless architecture calls for looking at security in new ways, says Feghali, "With the new service or solution, security frameworks need to be evaluated to see what new gaps and risks will present themselves. They will then need to reassess their controls and processes to refine them to address these new risk models." Security protocols and processes differ in a serverless environment. Namely, with the use of serverless computing, an enterprise's attack surface widens. "The attack surface is much larger as attackers can leverage every component of the application as an entry point," Feghali says, which includes "the application layer, code, dependencies, configurations and any cloud resources their application requires to run properly. 
There is no OS to worry about securing, but there is no way to install endpoint or network-level detection solutions such as antivirus or [intrusion detection or prevention systems]. This lack of visibility allows attackers to remain undetected as they leverage vulnerable functions for their attacks, whether to steal data or compromise certificates, keys, and credentials to access the organization." At this point, introducing the security measures needed to better protect serverless environments may add more cost and overhead, according to a study out of the University of California at Berkeley, led by Eric Jonas. "Serverless computing reshuffles security responsibilities, shifting many of them from the cloud user to the cloud provider without fundamentally changing them," their report states. "However, serverless computing must also grapple with the risks inherent in both application disaggregation and multi-tenant resource sharing." One approach to securing serverless is "oblivious algorithms," the UC Berkeley team continues. "The tendency to decompose serverless applications into many small functions exacerbates this security exposure. While the primary security concern is from external attackers, the network patterns can be protected from employees by adopting oblivious algorithms. Unfortunately, these tend to have high overhead." Physical isolation of serverless resources and functions is another approach -- but this, of course, comes with premium pricing from cloud providers. Jonas and his team also see possibilities with generating very rapid instances of serverless functions. "The challenge in providing function-level sandboxing is to maintain a short startup time without caching the execution environments in a way that shares state between repeated function invocations. One possibility would be to locally snapshot the instances so that each function can start from clean state." 
Feghali's firm, Attivo Networks, focuses on adoption of "deception technologies" intended to provide greater visibility across the various components in a serverless stack, "as a way to understand when security controls are not working as they should, detect attacks that have by-passed them, and for notification of policy violations by insiders, suppliers, or external threat actors." The bottom line is that handing over the keys of the server stack to a third-party cloud provider doesn't mean outsourcing security as well. Security needs to remain the enterprise customer's responsibility, because it is they who will need to answer in the event of a breach. Source
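Feghali's point that "attackers can leverage every component of the application as an entry point" follows from the shape of the programming model: the platform invokes each function directly with externally supplied input, and no shared OS or network perimeter sits in front of it. A minimal Lambda-style sketch (the event shape is hypothetical) of a function doing its own input validation:

```python
import json

def handler(event, context=None):
    """Each serverless function is its own network-facing entry point, so it
    validates untrusted input itself rather than relying on a perimeter
    appliance the platform no longer lets you install."""
    try:
        data = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        # Malformed input is an attack probe to reject, not a crash.
        return {"statusCode": 400, "body": "invalid JSON"}
    name = str(data.get("name", "world"))[:64]  # bound untrusted field length
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello {name}"})}
```

Multiplied across the many small functions a decomposed application deploys, each such handler is a separate place where validation can be forgotten, which is the widened attack surface both Feghali and the Berkeley report describe.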
  23. Israeli spyware from shadowy NSO has made plenty of headlines this year, most recently back in May when it was exposed as the culprit in a high-profile WhatsApp hack that had enabled nation-states to target specific phones, installing spyware through voice calls on both iPhone and Android devices whether or not a user answered an infected call. That hack was first exposed by the Financial Times, and the same newspaper has continued to investigate, publishing a report today (July 19) that exposes sales claims being made by NSO that "its [Pegasus] technology can surreptitiously scrape all of an individual’s data from the servers of Apple, Google, Facebook, Amazon and Microsoft." NSO has continually denied that it promotes mass-surveillance or unethical hacking, but, according to the FT, it "did not specifically deny that it had developed the capability" described in documents seen by the newspaper. Put simply, the latest revelation suggests that an infected phone will provide NSO's software with the authentication keys for the cloud services—including Google Drive, Facebook Messenger and iCloud—that can be accessed by that device. And given that smartphones have now become the individual entry points into our cloud-based world, the implications of this will raise serious concerns. The FT cites a claim in one of the sales documents that this all happens without "prompting 2-step verification or warning email on a target device." NSO's Pegasus software has been described as the most sophisticated smartphone spyware of its kind and has become a highly prized export, with the Israeli government helping the company market it to foreign states. The fact that Israel has been accused of allowing sales of the technology to countries like Saudi Arabia and the UAE carries geopolitical interest given the context and the developing situation in the Middle East. 
Now, this latest report suggests that compromising data on a phone, or using the phone as an eavesdropping endpoint, is not enough. The phone can be hacked to such an extent that it provides the keys to the entire digital kingdom—the cloud-based ecosystem within which it operates. According to the FT, Amazon claimed there was no evidence of such a hack having access to its systems, but assured—as did Facebook—that it would review the claims. Microsoft and Apple responded with assurances around the continually developing security features on their platforms. Google didn't comment. Meanwhile, NSO itself told the newspaper that "we do not provide or market any type of hacking or mass-collection capabilities to any cloud applications, services or infrastructure." But the FT cites an NSO sales pitch, seen by the newspaper and prepared for the Ugandan government, which claimed that "having access to a 'cloud endpoint' means eavesdroppers can reach 'far and above smartphone content', allowing information about a target to 'roll in' from multiple apps and services." Smartphone compromises have been a continual theme this year, with malicious apps lurking in the Google Play Store, the NSO WhatsApp vulnerability, an Android "media file jacking" hack hitting both WhatsApp (again) and Telegram, and even the current FaceApp "something from nothing" controversy. State-level hacking, though, is on an entirely different level. The sophistication applied by the governments of China, Russia, Iran and North Korea goes way beyond the mass-market attacks that mainly target financial information, login credentials and user carelessness. With NSO, there is a productized state-level hack, and that is why it causes so much concern. The targets of such hacks are significantly better protected than casual smartphone users. In May, Amnesty International (along with other human rights groups) filed a lawsuit in Israel to revoke NSO's export license. 
The groups cited allegations that NSO software had been used by oppressive regimes to target human rights activists and journalists—including its use by Saudi Arabia on murdered journalist Jamal Khashoggi. NSO denies that its software played any part in tracking Khashoggi—the company’s CEO Shalev Hulio claimed that "Khashoggi was not targeted by any NSO product or technology, including listening, monitoring, location tracking and intelligence collection." There has always been a risk associated with the integration of cloud platforms and multiple endpoints. And this is it. If I trust a device to access an entire online world, if the device is compromised then so is the security associated with that entire world. The cloud platforms have played down the exposure here. But you can bet that behind the scenes there will be some serious meetings and planning sessions in California and Seattle later today. Source
  24. pCloud.com The popular service pCloud.com is a promising Swiss project, launched in 2013, that by 2019 had more than 8 million active users. The service was conceived as safe and reliable storage for both individuals and enterprises. For most people looking for cloud storage, the choice comes down to Dropbox, Google Drive or OneDrive. Although these services have their strengths, when it comes to privacy none of them is trustworthy. pCloud, by contrast, has Swiss roots, does not analyze the data you upload and does not sell information about you to third parties. In addition, the service offers an extra tool, pCloud Crypto, which lets you set up private end-to-end encrypted folders, protection that none of the three kings of cloud storage provides. Put a file in the Crypto folder and no one but you has access to the decryption keys. Another important advantage of pCloud.com is that it can work as a virtual hard disk and does not store files locally. Even on the free plan this is very convenient, especially for computers with little internal storage, and a paid plan gets you a 2-terabyte virtual disk at an attractive price. There is also the option of paying monthly in small installments, or of buying a lifetime license. Would you like to get 2 terabytes for life? Under the terms of the service, "lifetime" means at least 99 years, so that will most likely be enough for you. Here you can win licenses. Prizes to win:
• 1st prize: a lifetime Premium Plus 2 TB account (value 350 EUR)
• 2nd prize: a lifetime Premium Plus 500 GB account (value 175 EUR)
• 3rd prize: a one-year Premium Plus 2 TB subscription (value 95.88 EUR)
You can participate with the email address of your free pCloud account or with a social network account. Read the rules of the game! 
DURATION OF THE GAME: Start of the contest: 06.06.2019 - 10AM CET End of the contest: 16.06.2019 - 0AM CET Announcement of the winners: 17.06.2019 Enter here