Turk posted a topic in FileSharing News

By Ernesto, January 17, 2014

For many people BitTorrent is synonymous with piracy, but the popular file-sharing technology has much more to offer. Over the past year BitTorrent Inc. has intensified its efforts to help artists reach millions of new fans, and not without result. A bundle of tracks from Moby’s latest album, Innocents, which can be downloaded for free and completely legally, is the most-shared torrent of 2013 with an impressive 8.9 million downloads.

Earlier today we published a list of the top 20 most-pirated artists. With well over 5 million downloads, that chart was topped by Bruno Mars. Interestingly enough, the honor of the most-downloaded artist on BitTorrent goes to a legal download.

BitTorrent Inc., the company behind the immensely popular file-sharing client uTorrent, has just released an overview detailing the successes of its “bundles” program, alongside a list of the most-downloaded legal content. The company regularly teams up with artists to release free content. One of the top stars last year was Moby, who shared several tracks from his 11th studio album, ‘Innocents’, for free via the popular file-sharing platform.

According to BitTorrent Inc., this bundle of free tracks was the most-shared torrent of 2013, with a staggering 8.9 million downloads. Not a big surprise, as more than 85,000 people were still actively sharing the tracks at the time of writing. The second most-shared bundle is the Epic Meal Time video bundle with 8.6 million downloads, followed by Kaskade’s tracks with 4.1 million downloads.

The secret behind BitTorrent’s bundle program is that the torrents are included with every download of the uTorrent and BitTorrent mainline applications. Since these clients are installed by tens of millions of people, the numbers add up quickly. “164,383 BitTorrent Bundles are downloaded around the world, every day.
These projects have been downloaded 60 million times, by fans in over 170 countries around the world,” BitTorrent Inc. notes.

Bragging rights aside, the downloads also bring value to the artists in question. They are able to reach millions of new fans, which may eventually boost concert visits and sales. Moby, for example, managed to add 419,000 people to his email list via the bundle. In addition, 130,000 people clicked through to the iTunes store, while 68,000 new remixes of his tracks were created. That can be considered quite a success.

The top 20 most-downloaded bundles are listed below. More stats and background information on the program are available in the 2013 BitTorrent report.

http://torrentfreak.com/and-the-most-downloaded-torrent-of-2013-is-completely-legal-140116
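The per-day and cumulative figures BitTorrent Inc. quotes are, as it happens, arithmetically consistent. A quick back-of-the-envelope check, assuming the 164,383-per-day figure is a yearly average (the article does not say so explicitly):

```python
# Sanity check: does ~164,383 bundle downloads per day add up to the
# quoted 60 million total over a year? (Assumption: the daily figure
# is an annual average; the article does not state this.)
daily_downloads = 164_383
days_per_year = 365

yearly_total = daily_downloads * days_per_year
print(yearly_total)  # 59999795 -- essentially the quoted 60 million
```

The daily figure looks like it was derived by dividing 60 million by 365, which would explain its suspicious precision.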
By Rahul Bali, February 13, 2014

Supercomputers were introduced in the 1960s, designed initially, and for decades primarily, by Seymour Cray at Control Data Corporation. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of “off-the-shelf” processors were the norm.

Supercomputers play an important role in the field of computational science and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion. Throughout their history, they have been essential in the field of cryptanalysis.

Supercomputer operating systems have undergone major transformations as sea changes have taken place in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adoption of generic software such as Linux. Today 476 of the world’s 500 fastest supercomputers, or 95.2 percent, run Linux.

Linux has ruled supercomputing for years, but it wasn’t always that way. In 1993, the first successful Linux distributions, Slackware and Debian, were only just getting off the ground. Before Linux made its move, Unix was supercomputing’s dominant operating system. Since 2003, the top operating system by performance share on the TOP500 supercomputer list has undergone a complete flip, from 96 percent Unix to 96 percent Linux. By 2004, Linux had taken the lead for good.
According to The Linux Foundation, “Linux [became] the driving force behind the breakthroughs in computing power that have fueled research and technological innovation. In other words, Linux is dominant in supercomputing, at least in part, because it is what is helping researchers push the limits on computing power.”

10. SuperMUC (IBM)

SuperMUC, the successor of the Höchstleistungsrechner Bayern II, is a new supercomputer of the Leibniz Supercomputing Centre in Garching near Munich, available for European researchers to probe the frontiers of medicine, astrophysics, quantum chromodynamics, and other scientific disciplines such as computational fluid dynamics, computational chemistry, life sciences, genome analysis, and earthquake simulations. SuperMUC was named the second fastest supercomputer in Europe on November 12, 2012 at the International Supercomputing Conference in Hamburg, Germany. It is also the fastest Intel-compatible system in the world.

SuperMUC has 18,432 Intel Xeon Sandy Bridge-EP processors running in IBM System x iDataPlex servers, with a total of 147,456 cores and a peak performance of about 3 petaflops. The main memory is 288 terabytes, together with 12 petabytes of hard disk space based on the IBM General Parallel File System. It also uses a new form of cooling that IBM developed, called Aquasar, which uses hot water to cool the processors, a design that IBM claims should cut cooling electricity usage by 40 percent. SuperMUC runs SUSE Linux Enterprise Server.

9. Vulcan (IBM)

Located at the Lawrence Livermore National Laboratory in California, the Vulcan supercomputer is devoted to both government use and collaborative endeavours of industry and research universities. Its 393,216 cores, based on IBM’s Blue Gene/Q supercomputing technology, run at 4.3 petaflops. The machine consists of 24 racks and 24,576 compute nodes, and sits within LLNL’s high-performance facility together with Sequoia.
Vulcan runs Red Hat Enterprise Linux.

8. JUQUEEN (IBM)

JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system and, as of June 2013, the highest-ranked machine in Europe. JUQUEEN packs 458,752 cores with a peak performance of about 5,872 teraflops. JUQUEEN runs Red Hat Enterprise Linux.

7. Stampede (Dell)

On January 7, 2013, TACC’s new cluster, “Stampede”, went into production. Stampede comprised 6,400 nodes, 102,400 CPU cores, 205 TB of total memory, 14 PB of total storage, and 1.6 PB of local storage. The bulk of the cluster consisted of 160 racks of primary compute nodes, each with dual 8-core Xeon E5-2680 processors, a Xeon Phi coprocessor, and 32 GB of RAM. The cluster also contained 16 nodes with 32 cores and 1 TB of RAM each, 128 “standard” compute nodes with Nvidia Kepler K20 GPUs, and other nodes for I/O (to a Lustre filesystem), login, and cluster management.

A pre-production configuration of Stampede was listed as the 7th fastest supercomputer in November 2012, with a delivered performance of 2,660 teraflops. Because the system was still being assembled, the submitted benchmark was run using 1,875 nodes with Xeon Phi coprocessors and 3,900 nodes without. In June 2013, the benchmark was re-run using 6,006 nodes (all with Xeon Phi coprocessors), delivering 5,168 teraflops and moving the system up to 6th place. Stampede runs CentOS Linux.

6. Piz Daint (Cray)

The Piz Daint supercomputer is not only the sixth most powerful machine in the world but also the fourth most energy-efficient. It is one of only two supercomputers in the top 10 of the efficiency ranking capable of maintaining petaflops performance. Located at the Swiss National Supercomputing Centre in Lugano, Piz Daint is being used to parse huge data sets and simulate processes for projects in geophysics, materials science, chemistry, and other areas, but especially in climate modeling.
To power that research, the computer uses a hybrid system that combines the advanced network architecture of the Cray XC30 with two cutting-edge processors, the Intel Xeon E5 CPU and the Nvidia Tesla K20X GPU. The computer’s 5,272 compute nodes are bound together in a special low-latency network to form an unusually compact machine.

Piz Daint, in operation at CSCS since April 2013, went through a major upgrade in October 2013. During this phase, the supercomputer based on Intel Xeon E5 processors more than doubled in size, from 12 to 28 cabinets, and was upgraded to a hybrid architecture featuring Nvidia Tesla K20X graphical processing units (GPUs). With a total of 5,272 hybrid compute nodes as well as a powerful high-bandwidth, low-latency network, the machine now reaches a peak performance of 7.787 petaflops. Piz Daint runs the Cray Linux Environment.

5. Mira (IBM)

Mira, a petascale Blue Gene/Q supercomputer, was constructed by IBM for Argonne National Laboratory’s Argonne Leadership Computing Facility with the support of the United States Department of Energy, and partially funded by the National Science Foundation. Mira is used for scientific research, including studies in the fields of materials science, climatology, seismology, and computational chemistry. The supercomputer was utilized initially for sixteen projects selected by the Department of Energy. It has a performance of 8.16 petaflops and consumes 3.9 MW of power.

Mira is set to run a simulation of the entire universe over 13 billion years. The supercomputer will work by tracking the movement of countless particles as they collide and interact, combining into structures that could have formed galaxies billions of years back. Mira runs Red Hat Enterprise Linux. The cost of building Mira has not been released by IBM.
Early reports estimated that construction would cost US$50 million, and Argonne National Laboratory announced that Mira was bought using money from a grant of US$180 million. In a press release, IBM marketed the supercomputer’s speed, claiming that “if every man, woman and child in the United States performed one calculation each second, it would take them almost a year to do as many calculations as Mira will do in one second”.

4. K computer (Fujitsu)

The K computer, named for the Japanese word “kei” meaning 10 quadrillion, is a supercomputer manufactured by Fujitsu, currently installed at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan. The K computer is based on a distributed-memory architecture with over 80,000 compute nodes. It is used for a variety of applications, including climate research, disaster prevention, and medical research. K set a LINPACK record with a performance of 8.162 petaflops, making it the fastest supercomputer in the world at the time.

The K computer comprises over 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors contained in 864 cabinets, for a total of over 640,000 cores, manufactured by Fujitsu with 45 nm CMOS technology. Each cabinet contains 96 compute nodes in addition to 6 I/O nodes. Each compute node contains a single processor and 16 GB of memory. The computer’s water-cooling system is designed to minimize failure rate and power consumption. The K computer runs Red Hat Enterprise Linux.

3. Sequoia (IBM)

IBM Sequoia, a petascale Blue Gene/Q supercomputer, was constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). Record-breaking science applications have been run on Sequoia, the first machine to cross 10 petaflops of sustained performance.
The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6-trillion-particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation.

Sequoia is used primarily for nuclear weapons simulation, replacing the Blue Gene/L and ASC Purple supercomputers at Lawrence Livermore National Laboratory. Sequoia is also available for scientific purposes such as astronomy, energy, lattice QCD, study of the human genome, and climate change.

Sequoia is a Blue Gene/Q design, building off previous Blue Gene designs. It consists of 96 racks containing 98,304 compute nodes. The compute nodes are 16-core PowerPC A2 processor chips with 16 GB of DDR3 memory each; the system thus contains 1,572,864 processor cores in total, with 1.5 PiB of memory. It covers an area of about 3,000 square feet. The compute nodes are interconnected in a 5-dimensional torus topology.

In January 2013, Sequoia set the record as the first supercomputer to use more than one million computing cores at a time for a single application. Stanford Engineering’s Center for Turbulence Research used it to solve a complex fluid dynamics problem: the prediction of noise generated by a supersonic jet engine.

2. Titan (Cray)

Titan is a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects, dealing mostly with molecular-scale physics or climate models. Titan is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, and uses graphics processing units in addition to conventional central processing units. It is the first such hybrid to perform over 10 petaflops. The upgrade began in October 2011, stability testing commenced in October 2012, and the machine became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.
Titan will itself be replaced by an approximately 200-petaflops system in 2016, as part of ORNL’s plan to operate an exascale (1,000 petaflops to 1 exaflops) machine by 2020. Titan employs AMD Opteron CPUs in conjunction with Nvidia Tesla GPUs to improve energy efficiency while providing an order-of-magnitude increase in computational power over Jaguar. It uses 18,688 CPUs paired with an equal number of GPUs to perform at a theoretical peak of 27 petaflops. This was enough to take first place in the November 2012 ranking. Titan runs the Cray Linux Environment.

1. Tianhe-2 (NUDT)

Tianhe-2 is a 33.86-petaflops supercomputer located at Sun Yat-sen University in Guangzhou, China. It was developed by a team of 1,300 scientists and engineers, and is the world’s fastest supercomputer as of November 2013. According to NUDT, Tianhe-2 will be used for simulation, analysis, and government security applications. The development of Tianhe-2 was sponsored by the 863 High Technology Program, initiated by the Chinese government, the government of Guangdong province, and the government of Guangzhou city. It was built by China’s National University of Defense Technology in collaboration with the Chinese IT firm Inspur.

With 16,000 compute nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi chips, Tianhe-2 represents the world’s largest installation of Ivy Bridge and Xeon Phi chips, counting a total of 3,120,000 cores. Each of the 16,000 nodes possesses 88 gigabytes of memory (64 gigabytes used by the Ivy Bridge processors, and 8 gigabytes for each of the Xeon Phi processors). The total CPU-plus-coprocessor memory is 1,375 TiB. Tianhe-2 runs Kylin Linux.

http://www.linuxfederation.com/10-powerful-supercomputers-running-linux
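The Tianhe-2 figures quoted above hang together arithmetically. A quick consistency check, assuming 12-core Ivy Bridge Xeons and 57-core Xeon Phi coprocessors (the commonly reported per-chip core counts for this machine, which the article itself does not state):

```python
# Consistency check for the Tianhe-2 numbers quoted above.
# Assumption (not in the article): each node carries two 12-core
# Ivy Bridge Xeons and three 57-core Xeon Phi coprocessors.
nodes = 16_000

cores_per_node = 2 * 12 + 3 * 57   # 24 CPU cores + 171 coprocessor cores = 195
total_cores = nodes * cores_per_node
print(total_cores)                 # 3120000 -- matches the quoted 3,120,000

mem_per_node_gib = 64 + 3 * 8      # 88 GiB per node, as quoted
total_mem_tib = nodes * mem_per_node_gib / 1024
print(total_mem_tib)               # 1375.0 -- matches the quoted 1,375 TiB
```

Both the total core count and the total memory fall out exactly, which suggests the article’s per-node and whole-machine figures come from the same spec sheet.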
Reefa posted a topic in General News

A team of astronomers led by Dr Robert Wittenmyer of the University of New South Wales has discovered a super-Earth orbiting near the inner edge of the habitable zone of Gliese 832 (GJ 832), a red-dwarf star previously known to host a cold Jupiter-like exoplanet.

Gliese 832, also known as HD 204961 or LHS 3685, is an M1.5 dwarf located in the constellation Grus, about 16 light-years from Earth. It has about half the mass and radius of the Sun. This star was already known to harbor Gliese 832b, a cold Jupiter-like planet discovered in 2009.

“With an outer giant planet and an interior potentially rocky planet, this planetary system can be thought of as a miniature version of our Solar System,” said Prof Chris Tinney, an astronomer with the University of New South Wales and a co-author of the discovery paper accepted for publication in the Astrophysical Journal (arXiv.org pre-print).

The newly discovered exoplanet, labeled Gliese 832c, has an orbital period of 35.68 days, a mass 5.4 times that of Earth, and receives about the same average energy from its star as Earth does from the Sun. Gliese 832c might have Earth-like temperatures, albeit with large seasonal shifts, given a similar terrestrial atmosphere. “If the planet has a similar atmosphere to Earth it may be possible for life to survive, although seasonal shifts would be extreme,” Prof Tinney said. A denser atmosphere, something expected for super-Earths, could easily make this planet too hot for life: a super-Venus instead.

The Earth Similarity Index of Gliese 832c (0.81) is comparable to those of the exoplanets Gliese 667Cc (0.84) and Kepler-62e (0.83). This makes it one of the top three most Earth-like planets according to the ESI, and the closest of the three to Earth, a prime object for follow-up observations.

Gliese 832c was discovered from its gravitational pull on its star, which causes the star to wobble slightly.
Dr Wittenmyer, Prof Tinney, and their colleagues used the Anglo-Australian Telescope, the 6.5-m Magellan Telescope, and the European Southern Observatory’s 3.6-m telescope to make the new discovery.

Gliese 832b and c form a scaled-down version of our own Solar System, with an inner, potentially Earth-like planet and an outer, Jupiter-like giant planet. Gliese 832b may well have played a dynamical role in the system similar to that played by Jupiter in our Solar System.

“It will be interesting to know if any additional objects in the Gliese 832 system follow this familiar Solar System configuration, but this architecture remains rare among the known exoplanet systems,” the scientists said.

Source
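The “wobble” mentioned above is the radial-velocity method: the planet’s gravity makes the star’s line-of-sight velocity oscillate, and the size of that oscillation follows from Kepler’s laws. A rough sketch of the expected signal for Gliese 832c, assuming a circular, edge-on orbit (sin i = 1) and a stellar mass of about 0.45 solar masses (a reading of the article’s “about half the mass ... of the Sun”, not a figure it states):

```python
import math

# Rough radial-velocity semi-amplitude K for Gliese 832c, assuming a
# circular, edge-on orbit (sin i = 1). The 0.45 M_sun stellar mass is
# an assumption consistent with "about half the mass ... of the Sun".
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg

m_star = 0.45 * M_SUN            # host star Gliese 832
m_planet = 5.4 * M_EARTH         # Gliese 832c, 5.4 Earth masses
period = 35.68 * 86400           # 35.68-day orbit, in seconds

# K = (2*pi*G / P)^(1/3) * m_p / (m_star + m_p)^(2/3), for sin i = 1
k = (2 * math.pi * G / period) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)
print(f"K ~ {k:.2f} m/s")        # on the order of 2 m/s -- a tiny wobble
```

A stellar velocity shift of a couple of metres per second is near the precision limit of the spectrographs of the time, which is why detections like this one require combining data from several telescopes over many orbits.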