Showing results for tags 'aws'.

Found 12 results

  1. AWS: S3 storage now holds over 100 trillion objects

AWS' S3 online storage service turns 15 years old and now stores over 100 trillion objects. Amazon Web Services' (AWS) cloud storage platform S3, or Simple Storage Service, today stores over 100 trillion objects. AWS's Jeff Barr revealed the figure to mark S3's fifteenth anniversary. AWS launched S3 publicly on March 14, 2006, four years after Amazon launched Amazon.com Web Services, although that was far from the cloud infrastructure service AWS is today. S3 was AWS' first generally available service, promising developers cheap storage priced by the amount of storage used per month. Five months later AWS launched Elastic Compute Cloud (EC2), offering developers compute resources as well.

S3 has grown tremendously by object count over the past decade. AWS S3 hit one trillion objects in 2012, while Microsoft, which launched Azure in October 2008, had four trillion objects that year. Barr recalls that S3's API started with a simple design: "Create a bucket, list all buckets, put an object, get an object, and put an access control list," notes Barr (these five calls are sketched after this article). Barr also says S3 is designed to provide "eleven 9's of durability," meaning that an object stored in S3 has a durability of 99.999999999%.

In the 15 years since S3's launch, AWS has introduced a host of new services, such as S3 Glacier Deep Archive, a store for large volumes of data that are not accessed often, various data replication services, security features, and its Snowmobile shipping container for migrating petabytes of data from on-premises data centers to AWS. Barr notes that AWS recently "dramatically" reduced latency for 0.01% of the Put requests to S3. "While this might seem like a tiny win, it was actually a much bigger one," Barr explains, as it helped avoid customer requests that time out and retry. Another benefit was that it gave developers the insights they needed to reduce latency.

AWS today remains the largest cloud infrastructure provider, with quarterly revenues exceeding $12 billion and a $46 billion annual run rate. It has also become a star performer within Amazon, with former AWS CEO Andy Jassy recently taking over as chief of Amazon from founder Jeff Bezos. Source: AWS: S3 storage now holds over 100 trillion objects
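
To make the original API concrete, here is a minimal sketch of those five operations using the AWS SDK for Python (boto3). The bucket name, key, and region are hypothetical placeholders, not details from the article.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = "example-bucket-name"  # hypothetical; bucket names are globally unique

    # Create a bucket
    s3.create_bucket(Bucket=bucket)

    # List all buckets
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])

    # Put an object
    s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"Hello, S3")

    # Get an object
    body = s3.get_object(Bucket=bucket, Key="hello.txt")["Body"].read()

    # Put an access control list
    s3.put_object_acl(Bucket=bucket, Key="hello.txt", ACL="private")

For scale, if the eleven-9s figure is read as annual durability, the expected loss rate is about one object in 10^11 per year; across 100 trillion (10^14) stored objects, that works out to roughly a thousand objects a year.
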
  2. The AWS Cloud Map documentation says it will work with S3 buckets. What I'm struggling to understand is how I can set up the AWS S3 SDK to use Cloud Map namespaces and services when making requests to an S3 bucket. Is there a complete example somewhere that covers setting up an S3 bucket, adding it to Cloud Map, and using the AWS S3 SDK to access the S3 bucket through Cloud Map?

Why I need this: I'm currently trying to set up dev, stage, and prod environments using multiple AWS accounts, and I would like the application code to remain the same but use Cloud Map to map the resources. I understand how I can make this work with databases and other URLs. Where I'm stuck is when my code is using the AWS SDK to access AWS resources. (One possible approach is sketched below.)

Pursuing AWS certification
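
A hedged sketch of one way to do what the question asks: register each environment's bucket name as an instance attribute in a Cloud Map service, then have the application discover it at runtime and pass the result to the ordinary S3 client. The namespace, service, and attribute names below are hypothetical, not from the post.

    import boto3

    sd = boto3.client("servicediscovery", region_name="us-east-1")
    s3 = boto3.client("s3")

    # Discover the bucket registered for this environment in Cloud Map.
    resp = sd.discover_instances(
        NamespaceName="myapp.internal",   # hypothetical namespace
        ServiceName="reports-bucket",     # hypothetical service
    )
    bucket = resp["Instances"][0]["Attributes"]["bucket_name"]

    # The application code below stays identical across dev/stage/prod;
    # only the Cloud Map registration differs per account.
    s3.put_object(Bucket=bucket, Key="report.csv", Body=b"...")

The bucket itself would be registered once per environment with register_instance, storing the bucket name in the instance's attributes.
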
  3. JERUSALEM (Reuters) - Intel Corp’s Habana Labs business said on Wednesday it would take time to gain market share from Nvidia in cloud and data centre computing, but its deal this week with Amazon Web Services (AWS) was a solid first step.

Intel in December bought Israel-based artificial intelligence firm Habana for about $2 billion, seeking to expand its AI portfolio to bolster its data-centre business. Habana’s Gaudi AI training processor, launched in June 2019, has faster processing speeds to compete with similar products from Intel rival Nvidia.

“We have to realise that we’re starting from zero and Nvidia is 100%,” said Eitan Medina, Habana’s chief business officer, who said that having AWS as its first customer was very important. “The uphill battle or the process of taking market share has to go through convincing end developers to try it out,” he told reporters. “We are making the dent at the most important place. We’re starting with a very big guy that has the longest experience ... It will take time but I believe we’re on the right path.” Medina declined to comment on whether Habana was negotiating other deals.

Habana on Tuesday said its Gaudi processors will power AWS’s Amazon Elastic Compute Cloud “instances” for machine learning workloads, in Habana’s first deal for its Gaudi chips. Amazon is seeing growing demand for its cloud tools during the coronavirus pandemic. These chips, Intel said, would give 40% better price performance than current graphics processors. Medina said the advantages of Gaudi AI chips were efficiency and lower capital and operating expenses, which in turn could give AWS and others the ability to lower prices for customers for server time.

“We are now starting so it will depend on the combination of how we will execute and how important it is for users to lower their cost and to have alternatives to GPUs (graphics processing units),” Medina said. “Our total available market is 100% of AI.” Source
  4. Instances run on Mac minis will give developers a virtual environment on which to design apps for the Mac, iPhone, and other Apple devices.

Developers who need to create apps on a Mac can now do so virtually courtesy of Amazon Web Services (AWS). On Monday, Amazon announced the availability of macOS instances on AWS via the Amazon Elastic Compute Cloud (EC2). Run on Mac minis, the new virtual environment will let AWS customers run on-demand instances to build, test, and package apps on macOS. With EC2, developers will be able to design apps for the iPhone, the iPad, the Mac, the Apple Watch, the Apple TV, and Safari using AWS's scalable and pay-as-you-go structure.

The EC2 Mac instances will be hosted on an array of Mac mini computers outfitted with Intel's 8th generation 3.2 GHz (4.6 GHz turbo) Core i7 processors, 6 physical cores, 12 logical cores, and 32 GiB (gibibytes) of memory. Powered by Amazon's AWS Nitro System, the Macs will offer up to 10 Gbps of Virtual Private Cloud network bandwidth and 8 Gbps of Elastic Block Store storage bandwidth through high-speed Thunderbolt 3 connections. With this setup, the Mac minis can operate as fully integrated and managed instances like other Amazon EC2 instances. Available in bare metal instance size (mac1.metal), the EC2 Mac instances support macOS Mojave 10.14 and macOS Catalina 10.15, with support for macOS Big Sur 11.0 coming soon, according to Amazon. AWS customers can connect to these Mac instances through SSH for a command-line interface and through active remote screen sharing using a VNC client for a graphical interface.

As Apple attracts more developers eager to design apps for its different products, there's a greater need to provide them with the necessary resources. Instead of having to set up and maintain on-premises Mac hardware and software, those developers can turn to the new AWS instances to focus on the apps themselves. Individuals and organizations can access the virtual macOS environments to build and test their apps more quickly. "You can use these instances to create build farms, render farms, and CI/CD farms that target all of the Apple environments," AWS Chief Evangelist Jeff Barr said in a blog post that explains how to use the new environment. "You can provision new instances in minutes, giving you the ability to quickly and cost-effectively build code for multiple targets without having to own and operate your own hardware. You pay only for what you use, and you get to benefit from the elasticity, scalability, security, and reliability provided by EC2."

EC2 Mac instances are now available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia-Pacific (Singapore) AWS Regions as EC2 Dedicated Hosts with a minimum host allocation duration of 24 hours. Customers can purchase the instances on-demand or as part of a Savings Plan (one- and three-year plans). AWS users who want to try the new environment can get started at Amazon's EC2 Mac Instances page. (A sketch of the two-step launch process follows.) Source
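
Because EC2 Mac instances run on Dedicated Hosts, launching one is a two-step process: allocate a host, then start an instance on it. A minimal boto3 sketch; the availability zone is arbitrary and the AMI ID is a hypothetical placeholder for a macOS Catalina image:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Step 1: allocate a Dedicated Host for mac1.metal (24-hour minimum).
    host = ec2.allocate_hosts(
        AvailabilityZone="us-east-1a",
        InstanceType="mac1.metal",
        Quantity=1,
    )
    host_id = host["HostIds"][0]

    # Step 2: launch a macOS instance onto that host.
    # "ami-0123456789abcdef0" is a placeholder, not a real macOS AMI ID.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="mac1.metal",
        MinCount=1,
        MaxCount=1,
        Placement={"Tenancy": "host", "HostId": host_id},
    )
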
  5. (Reuters) - The COVID-19 pandemic has accelerated cloud adoption by companies by several years, Amazon.com Inc’s cloud head Andy Jassy said on Tuesday. Jassy, who leads Amazon Web Services (AWS), made the remarks at the company’s re:Invent conference, a marketing event for its cloud services. Among several announcements, AWS launched a new chip called Trainium at the event, taking aim at Nvidia Corp’s core business of offering powerful chips to train machine learning algorithms. The chip will complement Amazon’s Inferentia computing chip, which analyses incoming data from platforms like Amazon’s Alexa voice assistant. Source
  6. First solution: run on bigger servers to reduce chatter in the Kinesis fleet

Amazon Web Services has revealed that adding capacity to an already complex system was the reason its US-EAST-1 region took an unplanned and rather inconvenient break last week. The short version of the story is that the company’s Kinesis service, which is used directly by customers and underpins other parts of AWS’ own operations, added more capacity. Servers in the Kinesis fleet need to communicate with each other, and to do so create new threads for each of the other servers in the front-end fleet. AWS says there are “many thousands of servers” involved and that when new servers are added it can take up to an hour for news of additions to reach the entire fleet. Adding capacity therefore “caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration.” (A toy model of this failure mode follows the article.)

AWS figured that out, but also learned that fixing the problem meant rebooting all of Kinesis. But it was only possible to bring “a few hundred” servers back at a time, and as we’ve seen above Kinesis uses “many thousands of servers”. Which explains why recovery from the outage was slow. The whole sad story is explained in much greater detail in this AWS post, which also explains how it plans to avoid such incidents in future.

Plan one: use bigger servers. “In the very short term, we will be moving to larger CPU and memory servers, reducing the total number of servers and, hence, threads required by each server to communicate across the fleet,” the post says, explaining that doing so “will provide significant headroom in thread count used as the total threads each server must maintain is directly proportional to the number of servers in the fleet.” The company also plans new “fine-grained alarming for thread consumption in the service” and plans “an increase in thread count limits in our operating system configuration, which we believe will give us significantly more threads per server and give us significant additional safety margin there as well.” Also on the agenda: isolating in-demand services like CloudFront to use dedicated Kinesis servers.

Dashboard dashed by dependencies

The TIFU!-like post also outlines why Amazon’s dashboards offered only scanty info about the incident – because they, too, depend on a service that depends on Kinesis. AWS has built a dependency-lite way to get info to the Service Health Dashboard it uses as a public status page. The post says it worked as expected, but “we encountered several delays during the earlier part of the event … as it is a more manual and less familiar tool for our support operators.” The cloud therefore used the Personal Health Dashboard, visible to impacted customers only.

The post ends with an apology: “While we are proud of our long track record of availability with Amazon Kinesis, we know how critical this service, and the other AWS services that were impacted, are to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further.” Source
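
The arithmetic behind the failure is worth spelling out: if every front-end server keeps one communication thread per peer, threads per server grow linearly with fleet size, so adding servers can silently push every server past an OS thread limit at once. A toy Python sketch of that dynamic, with made-up numbers (the post does not disclose AWS's actual limits or fleet size):

    # Toy model of the Kinesis outage: per-server threads scale with fleet size.
    # All numbers are illustrative, not AWS's real figures.
    OS_THREAD_LIMIT = 10_000   # hypothetical per-process thread cap
    BASELINE_THREADS = 2_000   # hypothetical threads for request handling etc.

    def peer_threads(fleet_size: int) -> int:
        """One thread per other server in the front-end fleet."""
        return fleet_size - 1

    for fleet_size in (5_000, 8_001, 8_002, 12_000):
        total = BASELINE_THREADS + peer_threads(fleet_size)
        status = "ok" if total <= OS_THREAD_LIMIT else "EXCEEDS LIMIT"
        print(f"fleet={fleet_size:>6}  threads/server={total:>6}  {status}")

The model also shows why AWS's short-term fixes work: fewer, larger servers shrink the fleet size (and hence the per-server thread count), while raising the OS thread limit moves the ceiling.
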
  7. AWS is worried about the potential overstepping that could occur if the government is able to provide assistance to entities in response to significant cyber attacks on Australian systems.

The federal government recently closed consultation on a package of reforms focused on protecting critical infrastructure and systems of national significance. With that part of the process wrapped up, the government is now looking to introduce an enhanced regulatory framework, which would build on existing requirements under the Security of Critical Infrastructure Act 2018. This includes: a positive security obligation (PSO) for critical infrastructure entities, supported by sector-specific requirements; enhanced cybersecurity obligations for those entities most important to the nation; and government assistance to entities in response to significant cyber attacks on Australian systems.

With the definition of what constitutes critical infrastructure and systems of national significance not yet fully defined, the federal government is seeking to determine who the enhanced framework would apply to, with one proposed sector covering data storage and cloud. Amazon Web Services (AWS) said that while it was broadly supportive of the proposal to expand the regime to include the data and cloud sector, the expansion raises questions such as what service providers should be included in the sector, what security standards should apply, and how the government can prevent over-regulation.

In its submission [PDF] to the consultation, the cloud giant also raised concerns that the proposal for government "assistance" or "intervention" powers could give the government overly broad powers to issue directions or act autonomously. AWS said the breadth of the newly regulated critical infrastructure sectors, coupled with seemingly broad powers described in the consultation paper [PDF], raised many issues and unknowns. The consultation paper said the government assistance would be provided to entities that are the target or victim of a cyber attack through the establishment of a government capability and authorities to disrupt and respond to threats in an emergency.

"Critical infrastructure entities may face situations where there is an imminent cyber threat or incident that could significantly impact Australia's economy, security or sovereignty, and the threat is within their capacity to address. In these cases, we propose that government be able to provide reasonable, proportionate and time-sensitive directions to entities to ensure action is taken to minimise its impact," the government wrote.

AWS is concerned that there isn't clarity around whether the triggers for exercising such powers are objective and specific, whether or how the government would be able to objectively assess if its directions or assistance would improve the situation, what an entity could be directed to do or not do, what checks and balances would apply, and whether an entity has rights of review and appeal.

Elsewhere in its submission, AWS said it was unclear from the consultation paper whether and how the enhanced regulatory framework would apply, explaining that it was concerned the position of applying the enhanced regulatory framework at the "owner and operator level, not at [a] specific piece of technology" could lead to negative consequences. Instead, the cloud giant has recommended the enhanced regulatory framework only apply to specific critical infrastructure assets of a critical infrastructure entity.

In order to avoid over-regulation, AWS said a technology service provider -- that is also a regulated critical infrastructure entity complying with its own sector PSO -- should not have to comply with additional security obligations imposed by another regulator that duplicate or build upon that entity's PSO. It also wants clarification that entities will not be inspected, examined, or audited against the same requirements by multiple regulators. Acknowledging each sector is different, AWS said PSOs for one sector should not contradict or conflict with those in another sector, but it was concerned this approach could lead to a fragmented set of security requirements across different sectors.

Asking for further clarity, AWS wants an appropriate scope of what entities and infrastructure are included in the "data and the cloud" sector. If there were to be a threshold, the cloud giant has suggested a test of "a data centre containing IT equipment capable of consuming more than 100kW of power in total" so that operators of infrastructure have clarity on whether they are covered. In addition, AWS said the PSO should reflect that an entity is only able to implement security processes that are within its control. Source
  8. No diggity, NoSQL

Open-source NoSQL in-memory database Redis is the most popular database in the world if, that is, your world is solely within Amazon Web Services. The knock to the egos of the database big boys has come from Sumo Logic, which provides monitoring software mostly focused on AWS, and can be explained by some of the peculiarities of modern application architecture continuing to gain traction in the cloud.

According to Sumo Logic's research of 2,100 companies that it monitors, Redis is the number one database in AWS – with 28 per cent market share. MySQL came in second with 23 per cent and PostgreSQL third with 20 per cent. Oracle and Microsoft SQL Server were left languishing at joint sixth place with 10 per cent each. Compare these findings to the big bad world outside AWS and the contrast is stark. According to DBengines, Oracle, MySQL, and Microsoft SQL Server hog the top three spots respectively, a ranking that has not changed in six years. The database rating service, with measures based on website mentions, job ads, search queries, FAQs and social media, ranked Redis down in eighth place.

Redis's popularity as a database, a cache, and a message broker in modern cloud architecture (a short client-code sketch of these roles follows this article) explains its position in the AWS ranking; in the wider world it is still often perceived as just a cache rather than a database. That is a perception that Redis, the company behind the open-source database, has gone some way to dispel. Speaking to The Register last year, Redis Labs CTO and co-founder Yiftach Shoolman said the company wants to continue changing the perception of Redis from being a cache to "a real database. I think that our production numbers show it already. Redis can be extremely reliable." To this end, Redis this year promoted its database in real-time analytics as an "AI serving platform" for use in fraud detection and IoT monitoring.

Things are getting complex

The Sumo research also shows how those developing and rolling out applications face increasing complexity in infrastructure. The median number of AWS services used by enterprises grew from 15 last year to 26 services in 2020. Growth in service adoption shows organisations moving beyond Amazon's core storage, database, and network services. AWS Security Token Service, AWS Key Management Service, AWS Identity Access Management, and AWS Lambda all made the top 10 AWS services used. Meanwhile, 47 per cent of organisations have two or more AWS regions and 43 per cent told Sumo Logic that they have two or more AWS accounts to deal with. Over 40 per cent of AWS customers are using container orchestration and around 85 per cent are using Kubernetes to build and operate applications across multiple clouds. Serverless architecture is continuing to grow rapidly: AWS Lambda adoption is at 37 per cent, up from 24 per cent in 2017.

Sumo said the increase in interest in modern cloud-based and/or serverless architecture was accelerating due to the spread of COVID-19 and the need for businesses to operate remotely and demonstrate greater flexibility. "Many industries around the world are adopting digital-first models during the shelter-in-place order. Over the last 12 months, we see continued growth in the modern application stack, comprising IaaS services, microservice technologies and modern application components. And, multi-cloud adoption has accelerated with 70 per cent growth over the last 12 months as more companies adjust to support remote workforces," the report said.

All of which means that it is not just online shopping that is ensuring Amazon CEO Jeff Bezos is having a good pandemic. Source
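
The three roles the article mentions (database/cache plus message broker) each take only a few lines of client code, which is much of Redis's appeal. A minimal sketch using the redis-py client against a local server; the key and channel names are made up:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # As a cache: store a value with a 60-second expiry.
    r.set("session:42", "cached-user-data", ex=60)
    print(r.get("session:42"))

    # As a message broker: simple publish/subscribe.
    pubsub = r.pubsub()
    pubsub.subscribe("orders")
    r.publish("orders", "order-created")
    for message in pubsub.listen():
        if message["type"] == "message":   # skip the subscribe confirmation
            print(message["data"])
            break
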
  9. steven36

    Why AWS leads in the cloud

    The offshoot of Amazon’s online bookstore has led the public cloud market for a decade. How did it get there? Will its dominance continue?

The rumors of Amazon Web Services’ fall from the pinnacle were premature. In the push to democratize cloud computing services, AWS had the jump on everyone from the beginning, ever since it was spun out of the mega retailer Amazon in 2002 and launched the flagship S3 storage and EC2 compute products in 2006. It still does. AWS quickly grew into a company that fundamentally transformed the IT industry and carved out a market-leading position, and has maintained that lead — most recently pegged by Synergy Research at almost double the market share of its nearest rival Microsoft Azure, with 33 percent of the market to Microsoft’s 18 percent. Market tracker data from IDC for the second half of 2019 also puts AWS in a clear lead, with 13.2 percent of the public cloud services market, narrowly ahead of Microsoft with 11.7 percent.

As with any business, Amazon’s cloud success comes down to a confluence of factors: good timing, solid technology, and a parent company with deep enough pockets to make aggressive capital investments early on. There are other, unique factors that have led to the success of AWS, however, including a relentless customer focus, a ruthless competitive streak, and continued commitment to “dogfooding,” or eating your own dog food — a perhaps unfortunate turn of phrase that has proliferated through the tech industry since the late eighties. Dogfooding refers to a company making a bet on its own technology — in Amazon’s case by making it publicly available as a product or service. This is what Amazon did with S3 and EC2 in 2006, and it’s what Amazon has been doing with almost all of its AWS product launches since. We asked the experts how AWS has been able to dominate the public cloud market to date, and, with worldwide adoption of cloud services due to continue climbing, according to the 2020 IDG Cloud Computing Survey, whether AWS can stay on top of the pile for years to come.

First-mover advantage

There is no escaping the fact that Amazon’s jump on the competition has put it in the ascendancy from day one, giving it a six-year head start over its nearest competitor, Microsoft Azure. These years didn’t just help position AWS as the dominant cloud computing service provider in people’s minds, it also furnished the company with years of feedback to crunch through and better serve its customer base of software developers, engineers, and architects.

“They invented the market space, there wasn’t the concept of public cloud like this before,” Dave Bartoletti, vice president and principal analyst at Forrester said. “We have been renting computing services for 30 or 40 years. Really what AWS did was establish in a corporate environment for a developer or IT person to go to an external service and start a server with a credit card and do computing somewhere else.” As Bartoletti notes, AWS wasn’t just first to market, it also had the deep pockets of its parent company, allowing it to blow anyone else out of the water. “They outspent their rivals,” he bluntly assessed.

That being said, not all first-movers lead their market as definitively as AWS does — just ask the founders of Netscape. “Early movers don’t always have an advantage,” Deepak Mohan, research director for cloud infrastructure services at IDC, said, noting that AWS was especially rigorous in creating and bringing products to market. “Being a high-quality company and delivering a high-quality product and being responsive to customer needs all play equally important parts.”

A special relationship

Mohan points to Amazon’s superior ability to “eat its own dog food” as a key driver towards its success, as the cloud division had to address significant technology challenges faced by the huge ramping up of scale Amazon was seeing in the aftermath of the dotcom bubble bursting. “You have to consider the relationship between AWS and Amazon the e-commerce company,” said Ed Anderson, distinguished VP analyst at Gartner — which has AWS as its clear leader in its latest Magic Quadrant for Cloud Infrastructure and Platform Services. Just as customers of Google Cloud today want to “run like Google,” early AWS customers wanted to leverage the technology that had enabled Amazon to grow into an e-commerce giant so quickly.

“A hallmark of AWS has been how technical and capable it has been,” Anderson notes. “And being really oriented around that ‘builder’ audience of developers, implementers, and architects,” he adds. “As a consequence, the sales team is very technical and capable in having those conversations, which means the experience customers have is really smooth.”

Customer obsession

It is that attention to customer needs that has long been a hallmark of the AWS value proposition, even if they don’t always get it right. As Amazon founder and CEO Jeff Bezos wrote in a 2016 letter to shareholders: “Customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and your desire to delight customers will drive you to invent on their behalf.” It is this attention to what customers want — and don’t yet know they want, to paraphrase Steve Jobs, by way of Henry Ford — which has been codified in Amazon’s leadership principles. “Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers,” Amazon’s leadership principles state.

“That is a value I see exhibited over and over at AWS,” Anderson at Gartner observes. “This attention to customer requirements and the needs of builders and developers and architects, that has prioritized the features they built and is tightly aligned.” “They are incredibly customer focused and everything they build is driven by the customer,” Bartoletti at Forrester adds. “To maintain that as their large pool of customers continues growing gives them the advantage of knowing what their customers want.” Take the 2019 release of the hybrid cloud product AWS Outposts as an example. Instead of squaring neatly with Amazon’s public cloud-centric view of the world, Outposts met the customer’s needs in a different sphere — their on-prem data centers.

Everything services-first

A key move made by Bezos in the early days of commercial cloud computing was formalizing the way AWS would build and expose products to its customers. Referencing an early-2000s internal email mandate from Bezos, former Amazon and Google engineer Steve Yegge paraphrased in his Google Platforms Rant, from 2011, that: “All teams will henceforth expose their data and functionality through service interfaces. Teams must communicate with each other through these interfaces.” Lastly, “Anyone who doesn’t do this will be fired,” Yegge added.

With this mandate, Bezos spurred the creation of an enormous service-oriented architecture, with business logic and data accessible only through application programming interfaces (APIs). “From the time Bezos issued his edict through the time I left [in 2005], Amazon had transformed culturally into a company that thinks about everything in a services-first fashion. It is now fundamental to how they approach all designs, including internal designs for stuff that might never see the light of day externally,” Yegge wrote. The enormous service-oriented architecture had effectively transformed an infrastructure for selling books into an extensible, programmable computing platform. The online bookstore had become a cloud.

The everything store for enterprise builders

All of this has led to an unrivalled breadth and maturity of services available to AWS customers. And while Amazon had the jump on the competition, it hasn’t rested on its laurels, regularly pioneering new services in the public cloud, such as the cloud-based data warehouse Redshift, the high-performance relational database service Aurora, and the event-based serverless computing platform Lambda, after developing the latter service for its AI-driven virtual assistant Alexa. “Yes, Google Cloud and Microsoft have ‘closed the gap,’ but AWS is still more capable on breadth of offerings and the maturity of those individual services,” Anderson at Gartner says. “I would say when it comes to market perception, most customers feel Azure and AWS are effectively on par and Google slightly behind. In terms of pure capability, though, AWS is a more mature architecture and set of capabilities, and the breadth is wider.”

At the AWS re:Invent conference in December of 2019, AWS said it had 175 services, with a wealth of options and flavors across compute, storage, database, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications. “Without doubt the market leader, AWS often wins on developer functionality, due to the breadth of its services as a result of its first-mover advantage,” Nick McQuire, vice president of enterprise research at CCS Insight, says. “AWS has also done a good job at translating its scale into economic benefits for customers, although there are times where cloud can be cost prohibitive.”

This broad set of capabilities can also be seen as a negative for some, with the service catalog representing a dizzying maze of services and options, but this level of choice has also proved a great resource for engineers. Bartoletti at Forrester, who has called AWS the cloud “everything store” for enterprise builders, points to a key difference in approach. “AWS can have three to four different database services, and they don’t care which one you use, as long as you use it at Amazon,” he notes. “Traditionally vendors would have had to pick one and run with it. That makes AWS tough to compete with.”

The next phase for cloud computing

The age of AWS dominance shows no sign of slowing, but the competition is fierce. “Microsoft has been able to close the gap by being open source focused and commercializing that in their cloud as fast as AWS,” Bartoletti says. “Google is working hard not to over-rotate on the bleeding edge and focus on helping enterprises migrate workloads to the cloud.” Breadth and maturity of services, underpinned by strong engineering chops and relentless customer focus, look to keep AWS ahead of the curve for some time.

Now, the company’s ability to simplify the adoption of new technology for enterprise customers through managed services will be the litmus test for the next wave of cloud computing adoption. It will also determine how AWS will fare against the ongoing fierce competition from Microsoft Azure and Google Cloud. “I think it is far from given that AWS will always dominate the cloud market,” Mohan at IDC says. At the same time, he acknowledges that the competitors have a lot of catching up to do. “Google is still quite a ways behind, and Microsoft, while a force, has certain advantages in the enterprise market,” Mohan says. “It is conceivable that companies will get closer, but I don’t expect any substantial changes in the next few years... There is a leap in capacity and scale that is yet to be built. All of this gives [AWS] a clearly dominant position for now.”

As Warren Buffett said, “Never bet against America.” And when it comes to the public cloud market, we’ve learned it would be just as foolish to bet against Amazon. Source
  10. (Reuters) - Amazon.com Inc’s widely used cloud service, Amazon Web Services (AWS) was back up on Thursday following an outage that affected several users ranging from websites to software providers. “We have restored all traffic to Kinesis Data Streams via all endpoints and it is now operating normally,” the company said in a status update. Amazon Kinesis, a part of AWS’ cloud offerings, collects, processes and analyzes real-time data and offers insights. Video-streaming device maker Roku Inc, Adobe’s Spark platform, video-hosting website Flickr and the Baltimore Sun newspaper were among those hit by the outage, according to their posts on Twitter. AWS said it had identified the cause of the outage and taken action to prevent a recurrence, according to the status update. The company did not immediately respond to a Reuters request for comment on the cause of the outage. Source
  11. Issues reported across 28 AWS products, from Kinesis to Workspaces, and from DynamoDB to IoT Services.

Amazon Web Services (AWS), a core provider of internet infrastructure services, is going through a major outage today, and the service's spotty uptime is now causing huge issues at thousands of other online services across the internet. Almost all major cloud-based software apps that rely on AWS for their backend are currently impacted, from Adobe Spark to Roku, and from Flickr to Autodesk. Other impacted services also include smart devices (currently failing to connect to their private clouds), cryptocurrency portals (failing to process transactions), and streaming and podcast services (limiting users from accessing their accounts). Among the services reporting issues on the DownDetector page are Ring, Prime Music, Pokemon Go, Roku, MeetUp.com, League of Legends, Ancestry.com, Chime, and others.

According to Amazon's status page, at the core of today's outage is AWS Kinesis, an AWS product that can be used to aggregate and analyze large quantities of data in real-time. While many of the impacted services don't use AWS Kinesis, Amazon said the Kinesis issue actually caused problems internally at the company, with errors and second-hand disruption also appearing at 27 other AWS products. While the Amazon status page looks like all services are up and running, AWS engineers said the outage is also preventing them from updating the status page — beyond a small message at the top. For now, the issue appears to be limited to AWS' North America region.

Per AWS, besides Kinesis, the following AWS products are also seeing similar issues:

  • ACM
  • Amplify Console
  • API Gateway
  • AppMesh
  • AppStream2
  • AppSync
  • Athena
  • AutoScaling
  • Batch
  • CloudFormation
  • CloudTrail
  • CloudWatch
  • Cognito
  • Connect
  • DynamoDB
  • EventBridge
  • IoT Services
  • Lambda
  • LEX
  • Managed Blockchain
  • Marketplace
  • Personalize
  • Resource Groups
  • SageMaker
  • Support Console
  • Well Architected
  • Workspaces

Source
  12. By Mary Jo Foley for All About Microsoft

Amazon has filed officially to protest Microsoft's win of the $10 billion U.S. Department of Defense JEDI cloud contract.

It's not surprising, but as of today, November 14, it is official: Amazon AWS is protesting the U.S. Department of Defense's award of its $10 billion cloud contract to Microsoft. AWS made its decision to file paperwork challenging the Joint Enterprise Defense Infrastructure (JEDI) award public during an all-hands meeting on November 14, as reported by the Federal Times. According to a video the Federal Times said it had seen, AWS filed the required paperwork on Friday last week. Fedscoop confirmed this, saying the paperwork was filed with the Court of Federal Claims on November 8. The Federal Times said that Amazon planned to try to get the government to reveal what really happened in the awarding of JEDI. (I asked Amazon to confirm it was protesting the JEDI award, but no word back so far.)

Update: An Amazon spokesperson sent me the following statement: "AWS is uniquely experienced and qualified to provide the critical technology the U.S. military needs, and remains committed to supporting the DoD's modernization efforts. We also believe it's critical for our country that the government and its elected leaders administer procurements objectively and in a manner that is free from political influence. Numerous aspects of the JEDI evaluation process contained clear deficiencies, errors, and unmistakable bias, and it's important that these matters be examined and rectified."

Throughout much of the bidding process, Amazon was expected by many to be the triumphant bidder. In the later rounds, Amazon and Microsoft emerged as the two final bidders in the winner-take-all deal. (Google dropped out of the JEDI bidding late last year, while Oracle and IBM were eliminated earlier this year.) But in August this year, the Pentagon said it was putting the JEDI contract on hold after US President Donald Trump complained about potential conflicts of interest in the process. Amazon CEO Jeff Bezos has been targeted by Trump as a political thorn because of his ownership of The Washington Post.

The JEDI contract is designed to upgrade legacy systems with newer cloud services. The JEDI Cloud will provide "enterprise-level, commercial IaaS (infrastructure as a service) and PaaS (platform as a service) to the Department and any mission partners for all Department business and mission operations," the government said. The contract is expected to be worth up to $10 billion over 10 years.

Microsoft was awarded the JEDI contract on October 25. Industry experts have predicted since that date that Amazon would challenge the award. Source