Showing results for tags 'space'.

Found 13 results

  1. Richard Branson’s Virgin Galactic will be the first publicly traded company for human spaceflight

     The race to become the first publicly traded company dedicated to human spaceflight is over, and Virgin Galactic has won. The company will be listing its shares on the New York Stock Exchange through a minority acquisition made by Social Capital Hedosophia, the special purpose acquisition company (SPAC) created by former Facebook executive Chamath Palihapitiya as part of his exploration of alternative strategies to venture capital investing as the head of Social Capital, according to a report in The Wall Street Journal. Formed with a $600 million commitment roughly two years ago, the SPAC is expected to make an $800 million commitment to Virgin Galactic, according to the Journal’s reporting.

     Unlike other launch companies such as Elon Musk’s Space Exploration Technologies Corp., Virgin Galactic has focused on suborbital launches for conducting experiments and taking tourists to space. SpaceX is investing more heavily in the development of launch capabilities for lunar and interplanetary travel, and in commercial applications like Internet connectivity via satellite. Jeff Bezos’ Blue Origin also reportedly has plans for space tourism while pursuing several commercial and government launch contracts (and a lunar lander).

     Virgin Galactic was initially in discussions with the kingdom of Saudi Arabia for a roughly $1 billion capital infusion, but Virgin Galactic’s billionaire chief executive, Richard Branson, walked away from the deal in the wake of the kingdom’s assassination of Washington Post journalist Jamal Khashoggi. That’s when Palihapitiya stepped in, according to the Journal. The billionaire financier needed to do something with the capital he’d raised for the Hedosophia SPAC, since such investment vehicles have to make an investment within a two-year timeframe or be wound down. The Virgin Galactic business likely made a tempting target.
     The company already has roughly $80 million in commitments from people around the world willing to pay $250,000 for the privilege of a suborbital trip to the exosphere. Virgin Galactic launched as a business in 2004, two years after SpaceX made its first fledgling steps toward creating a private space industry, and was the first company to focus on space tourism and launching small satellites into orbit. The company’s commercial division, Virgin Orbit, is still competing for satellite launch contracts.

     Like most privately funded space companies, Virgin Galactic was a pet project of the billionaire behind it, with the Journal estimating that Branson has put nearly $1 billion into the company already. The new $800 million means that the SPAC isn’t the only investor in Virgin Galactic; Palihapitiya is making a $100 million investment in the company too. In return, the vehicle will own roughly 49% of the spaceflight business as it trades on the open market.

     Image Credits: Axelle/Bauer-Griffin/FilmMagic / Getty Images

     Source: Richard Branson’s Virgin Galactic will be the first publicly traded company for human spaceflight
  2. After populating space with satellites and sending probes to the Moon and Mars, India will have an address in the skies. Isro chief K. Sivan has said India will set up its own space station within seven years. Addressing reporters along with minister of state for the department of space Jitendra Singh in New Delhi, he said the project would be an extension of Gaganyaan, India's first manned mission, slated for early 2022.

     "We don't want to be part of the International Space Station (ISS), therefore we want to set up our own. Our station won't be very big. It will have a mass of 20 tonnes and be used for studies including microgravity tests. It will have provision for people to live for 15-20 days," Sivan said. India, meanwhile, will be part of international collaborations to send humans to the Moon and Mars, and to colonise the Moon, the chairman said.

     For three years, Isro has been quietly working on the 'space docking experiment' (Spadex), a technology that is crucial for making the space station functional. The department of space had allocated Rs 10 crore for Spadex, which allows transferring humans from one spacecraft to another. The immediate goal, however, will be to enable refuelling of spacecraft and the transfer of other systems from Earth to the station. Isro scientists on the Spadex project have been working on signal analysis equipment, a high-precision videometer for navigation, docking system electronics and real-time decision making for landing systems. "As part of Spadex, we will develop and demonstrate technologies needed for docking two spacecraft (chase & target) and to control one spacecraft from the attitude control system of the other spacecraft in the docked condition," Isro said.

     On the Gaganyaan mission, Sivan said, "We are racing against time to meet the 2022 launch deadline given by the Prime Minister.
     As we are planning to launch the mission by December 2021, we will send our shortlisted astronauts for advanced training abroad, as there is no time to set up the training centre here. However, the basic training will happen in India." Jitendra Singh said the shortlisting of the Indian crew will be completed in six weeks and training will take a year or two.

     Source
  3. Black holes are great at sucking up matter. So great, in fact, that not even light can escape their grasp (hence the name). But given their talent for consumption, why don't black holes just keep expanding and expanding until they swallow the Universe? Now, one of the world's top physicists has come up with an explanation. Conveniently, the idea could also unite the two biggest theories in all of physics.

     The researcher behind this latest explanation is none other than Stanford University physicist Leonard Susskind, also known as one of the fathers of string theory. He recently gave his two cents on the paradox in a series of papers, which basically suggest that black holes expand by increasing in complexity inwardly – a feature we just don't see while watching from afar. In other words, they expand in, not out.

     Weirder still, this hypothesis might have a parallel in the expansion of our own Universe, which also seems to be growing in a counterintuitive way. "I think it's a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity," Susskind was quoted in The Atlantic. "And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don't know the answer."

     Susskind might be speculating on the Universe's evolution, but his thoughts on why black holes grow in more than they grow out are worth unpacking. To be clear, though, for now this work has only been published on the pre-print site arXiv.org, so it has yet to be peer reviewed. That means we need to take it with a big grain of salt. On top of that, this type of research is, by its very nature, theoretical. But there are some pretty cool ideas in here. To get at them, we need to go back to basics for a moment. So … hang tight.
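The "not even light can escape" condition has a precise size attached to it: the Schwarzschild radius, r_s = 2GM/c², the radius below which a mass's escape velocity exceeds the speed of light. A quick back-of-the-envelope sketch (standard physical constants; this calculation is not from the article itself):

```python
# Schwarzschild radius: the size a mass must be squeezed below
# for its escape velocity to exceed the speed of light.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (m) at which escape velocity equals c."""
    return 2 * G * mass_kg / c**2

sun = 1.989e30     # solar mass, kg
print(f"Sun compressed to a black hole: r_s = {schwarzschild_radius(sun)/1000:.1f} km")
```

For the Sun this comes out near 3 km, which is why only extreme densities produce black holes.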
     For the uninitiated, black holes are dense masses that distort space to the extent that even light (read: information) lacks the escape velocity required to make an exit. The first solid theoretical underpinnings for such an object emerged naturally out of the mathematics behind Einstein's general relativity back in 1915. Since then, physical objects matching those predictions have been spotted, often hanging around the centres of galaxies.

     A common analogy is to imagine the dimensions of space plus time as a smooth rubber sheet. Much as a heavy object dimples the rubber sheet, mass distorts the geometry of spacetime. The properties of our Universe's rubber sheet mean it can form a deep gravity funnel that stretches 'down' without stretching much further 'out'. Most objects expand 'out' as you add material, not 'in'. So how do we even begin to picture this?

     Rubber sheets are useful analogies, but only up to a point. To understand how matter behaves against this super-stretchy backdrop, we need to look elsewhere. Luckily, physics has a second rulebook on 'How the Universe Works' called quantum mechanics, which describes how particles and their forces interact. The two rulebooks of general relativity and quantum mechanics don't always agree, though. Small things interpreted through the lens of general relativity don't make much sense. And big things like black holes produce gibberish when the rules of quantum mechanics are applied. This means we're missing something important – something that would allow us to interpret general relativity's space-bending features in terms of finite masses and force-mediating particles.

     One contender is the anti-de Sitter/conformal field theory correspondence, shortened to AdS/CFT. This is a 'string theory meets four-dimensional space' kind of idea, aiming to bring the best of both quantum mechanics and general relativity together.
     Based on its framework, the quantum complexity of a black hole – the number of steps required to return it to a pre-black-hole state – is reflected in its volume. The same thinking lies behind another brain-breaking idea called the holographic principle. The exact details aren't for the faint-hearted, but they are freely available on arXiv.org if you want to get your mathematics fix for the day.

     It might sound a bit like downloading movies onto your desktop only to find it's now 'bigger' on the inside. As ludicrous as that sounds, in the extreme environment of a black hole, more computational power might indeed mean more internal volume. At least this is what Susskind's AdS/CFT modelling suggests.

     String theory itself is one of those nice ideas still begging for an empirical win, so we're a long way from marrying quantum mechanics and general relativity. But Susskind's suggestion that quantum complexity is ultimately responsible for the volume of a black hole has physicists thinking through the repercussions. After all, black holes aren't like ordinary space, so we can't expect ordinary rules to apply. And if anybody is worth listening to on the subject, it's probably this guy.

     This research is available on arXiv.org.

     source
  4. Future Astronauts Could Enjoy Fresh Vegetables From an Autonomous Orbital Greenhouse

     If humanity is going to become a space-faring and interplanetary species, one of the most important requirements will be the ability of astronauts to see to their needs independently. Relying on regular shipments of supplies from Earth is not only inelegant; it's also impractical and very expensive. For this reason, scientists are working on technologies that would allow astronauts to produce their own food, water, and breathable air.

     To this end, a team of researchers from Tomsk Polytechnic University (TPU) in central Russia – along with scientists from other universities and research institutes in the region – recently developed a prototype for an orbital greenhouse. Known as the Orbital Biological Automatic Module, this device allows plants to be grown and cultivated in space and could be heading to the International Space Station (ISS) in the coming years.

     Since the beginning of the Space Age, numerous experiments have demonstrated that plants can be cultivated under microgravity conditions. However, these studies were carried out using greenhouses located in the living compartments of orbital stations and involved significant limitations in terms of technology and space.

     Plants cultivated in the TPU autonomous greenhouse.

     For this reason, the TPU research team began working to scale and improve the technologies necessary for cultivating important agricultural crops. The project team includes additional researchers from Tomsk State University (TSU), Tomsk State University of Control Systems and Radioelectronics (TUSUR), the Institute of Petroleum Chemistry and the Siberian Research Institute of Agriculture and Peat.
     As Aleksei Yakovlev, head of the TPU School of Advanced Manufacturing Technologies, explained in a TPU News release, the smart greenhouse project will incorporate technologies developed at TPU, including smart lighting that will accelerate plant growth, specialized hydroponics, automated irrigation, and harvesting solutions. At present, TPU is constructing a new testing ground so it can expand production of the smart greenhouse. The prototype greenhouse is being designed to provide astronauts with a continuous vegetarian diet.

     "In Tomsk, we will conduct interdisciplinary studies and solve applied problems in the field of agrobiophotonics," said Yakovlev. "At the same time, the research team includes scientists from Tomsk, Moscow, Vladivostok, and international partners from the Netherlands specializing in climate complexes, including one from Wageningen University."

     In the end, Yakovlev and his colleagues envision an autonomous module that would be capable of supplying food for astronauts and potentially even docking with the ISS. They also indicated that the module would be cylindrical in shape and contain a cultivation area measuring 30 m² (~320 ft²). As Yakovlev indicated, this would allow the module to be spun up to simulate different gravity conditions, including those present on the Moon and Mars, which experience the equivalent of about 16.5% and 38% of Earth gravity (0.1654 g and 0.3794 g), respectively. At present, it is unknown how well plants can grow on either body, and research to that effect is still in its infancy. Hence, the information provided by this module could prove very useful if and when plans for a lunar and/or Martian colony are realized.

     Dwarf wheat growing in the Advanced Plant Habitat.

     The design and engineering that go into the module will also take into account the kinds of conditions that are present in space, such as solar and cosmic radiation and extremes in temperature.
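The lunar and Martian gravity fractions quoted above can be turned into spin rates with the standard centripetal relation a = ω²r. A small sketch (the 2 m radius is an assumed value for illustration; the article does not give the cylinder's dimensions):

```python
import math

G_EARTH = 9.81  # m/s^2, standard Earth surface gravity

def spin_rpm(g_fraction: float, radius_m: float) -> float:
    """Rotation rate (rpm) at which centripetal acceleration
    omega^2 * r at the hull equals the requested fraction of
    Earth gravity."""
    a = g_fraction * G_EARTH          # target acceleration, m/s^2
    omega = math.sqrt(a / radius_m)   # angular velocity, rad/s
    return omega * 60 / (2 * math.pi)

for name, frac in [("Moon", 0.1654), ("Mars", 0.3794)]:
    print(f"{name}: {spin_rpm(frac, radius_m=2.0):.1f} rpm")
```

At a 2 m radius, lunar gravity needs roughly 8.6 rpm and Martian gravity about 13 rpm; larger radii need slower spins, which is one reason module geometry matters.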
     Beyond that, the module will be used to investigate what kinds of crops grow well in orbit, Yakovlev said. Three TPU experiments were recently approved for transport to the ISS. They include a device capable of 3D printing composite materials, housings for a swarm of satellites, and a multilayer nanocomposite coating (Peresvet) that will be applied to the ISS portholes to protect against micrometeoroid impacts. Their implementation will begin later this year and in 2021.

     Source
  5. Can't just use an iPhone — Space-grade CPUs: How do you send more computing power into space? Figuring out radiation was a huge "turning point in the history of space electronics."

     Mars beckons. NASA

     Phobos-Grunt, perhaps the most ambitious deep space mission ever attempted by Russia, crashed down into the ocean at the beginning of 2012. The spacecraft was supposed to land on the battered Martian moon Phobos, gather soil samples, and get them back to Earth. Instead, it ended up helplessly drifting in Low Earth Orbit (LEO) for a few weeks, because its onboard computer crashed just before it could fire the engines to send the spacecraft on its way to Mars.

     In the ensuing report, Russian authorities blamed heavy charged particles in galactic cosmic rays that hit the SRAM chips and led to a latch-up, a chip failure resulting from excessive current passing through. To deal with the latch-up, the two processors working in the Phobos-Grunt's TsVM22 computer initiated a reboot. After rebooting, the probe went into a safe mode and awaited instructions from ground control. Unfortunately, those instructions never arrived.

     The antennas meant for communications were supposed to become fully operational in the cruise stage of Phobos-Grunt, after the spacecraft left LEO. But nobody had planned for a failure that would prevent the probe from reaching that stage. After the particle strike, Phobos-Grunt ended up in a peculiar stalemate. Firing the onboard engines was supposed to trigger the deployment of the antennas. At the same time, the engines could only be fired with a command issued from ground control. That command, however, could not get through, because the antennas were not deployed. In this way, a computer error killed a mission that was several decades in the making. It happened, in part, because of some oversights from the team at NPO Lavochkin, the primary developer of the Phobos-Grunt probe.
     During development, in short, it was easier to count the things that worked in their computer than the things that didn't. Every little mistake they made became a grave reminder that designing space-grade computers is bloody hard. One misstep and billions of dollars go down in flames. Everyone involved had simply grossly underestimated the challenge of carrying out computer operations in space.

     Why so slow?

     Curiosity, everyone's favorite Mars rover, works with two BAE RAD750 processors clocked at up to 200MHz. It has 256MB of RAM and 2GB of solid-state storage. As we near 2020, the RAD750 stands as the current state-of-the-art, single-core space-grade processor. It's the best we can send on deep space missions today.

     Compared to any smartphone we carry in our pockets, unfortunately, the RAD750's performance is simply pathetic. The design is based on the PowerPC 750, a processor that IBM and Motorola introduced in late 1997 to compete with Intel's Pentium II. This means that perhaps the most technologically advanced space hardware up there is perfectly capable of running the original Starcraft (the one released in 1998, mind you) without hiccups, but anything more computationally demanding would prove problematic. You can forget about playing Crysis on Mars. Still, the price tag on the RAD750 is around $200k.

     Why not just throw an iPhone in there and call it a day? Performance-wise, iPhones are entire generations ahead of the RAD750 and, at roughly $1k apiece, cost far less than $200k. In retrospect, this is roughly what the Phobos-Grunt team tried to accomplish. They tried to boost performance and cut costs, but they ended up cutting corners. The SRAM chip in the Phobos-Grunt that was hit by a heavily charged particle went under the name of WS512K32V20G24M. It was well known in the space industry, because back in 2005, T.E. Page and J.M.
     Benedetto had tested those chips in a particle accelerator at the Brookhaven National Laboratory to see how they perform when exposed to radiation. The researchers described the chips as "extremely" vulnerable: single-event latch-ups occurred even at the minimum heavy-ion linear energy transfer available at Brookhaven. This was not a surprising result, mind you, because the WS512K32V20G24M chips had never been meant or tested for space. They had been designed for aircraft, military-grade aircraft for that matter. But they were easier to obtain and cheaper than real space-grade memories, so the Russians involved with Phobos-Grunt went for them regardless.

     "The discovery of the various kinds of radiation present in the space environment was among the most important turning points in the history of space electronics, along with the understanding of how this radiation affects electronics, and the development of hardening and mitigation techniques," says Dr. Tyler Lovelly, a researcher at the US Air Force Research Laboratory. The main sources of this radiation are cosmic rays, solar particle events, and the belts of protons and electrons circling at the edge of the Earth's magnetic field known as the Van Allen belts. Particles hitting the Earth's atmosphere are composed of roughly 89% protons, 9% alpha particles, 1% heavier nuclei, and 1% solitary electrons. They can reach energies up to 10^19 eV.

     Using chips not qualified for space in a probe intended to travel through deep space for several years was asking for disaster. In fact, Krasnaya Zvezda, a Russian military newspaper, reported at the time that 62% of the microchips used on the Phobos-Grunt were not qualified for spaceflight. The probe design was 62% driven by a "let's throw in an iPhone" mindset.

     Radiation becomes a thing

     Today, radiation is one of the key factors designers take into account when building space-grade computers. But it has not always been that way.
     The first computer reached space onboard a Gemini spacecraft back in the 1960s. The machine had to undergo more than a hundred different tests to get flight clearance. Engineers checked how it performed when exposed to vibration, vacuum, extreme temperatures, and so on. But none of those tests covered radiation exposure. Still, the Gemini onboard computer managed to work just fine – no issues whatsoever. That was the case because the Gemini onboard computer was too big to fail. Literally. Its whopping 19.5KB of memory was housed in a 700-cubic-inch box weighing 26 pounds. The whole computer weighed 58.98 pounds.

     First orbital rendezvous: Gemini VI keeps station after using its onboard computer to maneuver to position near Gemini VII. NASA

     Generally in computing, pushing processor technology forward has primarily been done by reducing feature sizes and increasing clock rates. We just made transistors smaller and smaller, moving from 240nm, to 65nm, to 14nm, down to the 7nm designs we have in modern smartphones. The smaller the transistor, the lower the voltage necessary to turn it on and off. That's why older processors with larger feature sizes were mostly unaffected by radiation – or unaffected by so-called single-event upsets (SEUs), to be specific. The voltage created by particle strikes was too low to really affect the operation of those large computers. But when space-facing humans moved down in feature size to pack more transistors onto a chip, particle-generated voltages became more than enough to cause trouble.

     Another thing engineers typically do to improve CPUs is clock them higher. The Intel 386SX that ran the so-called "glass cockpit" in the Space Shuttles was clocked at roughly 20MHz. Modern processors can go as high as 5GHz in short bursts. A clock rate determines how many processing cycles a processor can go through in a given time.
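The clock-rate problem has a simple first-order model: if each cycle exposes one brief window during which a particle strike can corrupt state, the fraction of time the chip is vulnerable grows linearly with clock frequency. A rough sketch (the 10 ps window length is an assumed, purely illustrative number, not a figure from the article):

```python
def vulnerable_fraction(clock_hz: float, window_s: float = 10e-12) -> float:
    """Fraction of each second during which a particle strike can
    corrupt on-chip state, assuming one fixed-length vulnerable
    window per clock cycle."""
    return clock_hz * window_s

old = vulnerable_fraction(20e6)   # Intel 386SX-class part, ~20 MHz
new = vulnerable_fraction(5e9)    # modern burst clock, ~5 GHz
print(f"20 MHz part: {old:.1e} of each second vulnerable")
print(f"5 GHz part:  {new:.1e} ({new/old:.0f}x more exposed)")
```

Under this toy model a 5 GHz part spends 250 times more of each second exposed than a 20 MHz part, which is the intuition behind the low clocks of space-grade chips.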
     The problem with radiation is that a particle strike can corrupt the data stored in on-CPU memory (like the L1 or L2 cache) only during an extremely brief moment in time called a latching window. This means that in every second, there is a limited number of opportunities for a charged particle to do damage. In low-clocked processors like the 386SX, this number was relatively low. But as clock speeds got higher, the number of latching windows per second increased as well, making processors more vulnerable to radiation. This is why radiation-hardened processors are almost always clocked way lower than their commercial counterparts. The main reason space CPUs develop at such a sluggish pace is that pretty much every conceivable way to make them faster also makes them more fragile. Fortunately, there are ways around this issue.

     Dealing with radiation

     "In the old days, radiation effects were often mitigated by modifications implemented in the semiconductor process," says Roland Weigand, a VLSI/ASIC engineer at the European Space Agency. "It was sufficient to take a commercially available information processing core and implement it on a radiation-hardened process."

     Known as radiation hardening by process, this technique relied on using materials like sapphire or gallium arsenide that are less susceptible to radiation than silicon in the fabrication of microprocessors. Processors manufactured this way worked very well in radiation-heavy environments like space, but they required an entire foundry to be retooled just to make them. "To increase performance we had to use more and more advanced processors. Considering the cost of a modern semiconductor factory, custom modifications in the manufacturing process ceased to be feasible for such a niche market as space," Weigand says. According to him, this trend eventually forced engineers to use commercial processors prone to single-event effects.
     "And to mitigate this, we had to move to alternative radiation-hardening techniques, especially the one we call radiation hardening by design," Weigand adds. The RHBD (radiation hardening by design) approach enables manufacturers to use a standard CMOS (complementary metal-oxide-semiconductor) fabrication process. This way, space-grade processors can be manufactured in commercial foundries, bringing prices down to a manageable level and enabling space mission designers to catch up a little to commercially available technology. Radiation is dealt with by engineering ingenuity rather than the sheer physics of the material.

     "For example, Triple Modular Redundancy is one of the most popular ways to achieve increased radiation resistance in an otherwise standard chip," Weigand explains. "Three identical copies of every single bit of information are stored in the memory at all times. In the reading stage, all three copies are read and the correct one is chosen by majority voting."

     With this approach, if all three copies are identical, the bit under examination is declared correct. The same is true when just two copies are identical and the third is different; the majority vote decides which bit value is correct. When all three copies are different, the system registers an error. The whole idea behind TMR is that the copies are stored at different addresses in the memory, placed at different spots on the chip. To corrupt the data, two particles would have to simultaneously strike exactly where two copies of the same bit are stored, and that is extremely unlikely. The downside to TMR, though, is that this approach creates a lot of overhead. A processor has to go through every operation thrice, which means it can reach only one-third of its performance. Thus, the latest idea in the field is to get space-grade processors even closer to their commercially available counterparts.
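The majority vote Weigand describes can be sketched in a few lines. A toy software model of triple modular redundancy on a single stored word (real implementations vote in hardware on every read; this is only an illustration of the voting logic):

```python
def tmr_read(copy_a: int, copy_b: int, copy_c: int) -> int:
    """Return the bitwise majority of three redundant copies.
    For each bit position, at least two copies agree, and that
    value wins, masking any single upset bit."""
    return (copy_a & copy_b) | (copy_a & copy_c) | (copy_b & copy_c)

word = 0b10110100
upset = word ^ 0b00001000            # one copy suffers a bit flip (SEU)
print(bin(tmr_read(word, upset, word)))  # the vote recovers the original word
```

Note that this voting circuit silently masks any single-bit upset; the "all three copies differ" error case the article mentions requires an additional comparison alongside the vote.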
     Instead of designing an entire system-on-chip with radiation-hard components, engineers choose where radiation hardness is really necessary and where it can safely be dispensed with. That's a significant shift in design priorities. Space-grade processors of old were built to be immune to radiation. Modern processors are not immune anymore, but they are designed to automatically deal with all kinds of errors radiation may cause.

     The LEON GR740, for example, is the latest European space-grade processor. It is estimated to experience a staggering 9 SEUs a day in a geostationary Earth orbit. The trick is that all those SEUs are mitigated by the system and do not lead to functional errors. The GR740 is built to experience one functional error every 300 or so years. And even if that happens, it can recover just by rebooting.

     A GR740 evaluation board from March 2016. ESA

     Europe goes open source

     The LEON line of space-grade processors based on the SPARC architecture is by far the most popular choice for space in Europe today. "Back in the 1990s, when the SPARC specification was chosen, it had significant industry penetration," says Weigand. "Sun Microsystems was using SPARC on their successful workstations." According to him, the key reasons for going with SPARC were existing software support and openness. "An open source architecture meant anybody could use it without licensing issues. That was particularly important since in such a niche market as space, the license fee is distributed among a very limited number of devices, which can increase their prices dramatically," he explains.

     Ultimately, ESA learned about the issues with licensing the hard way. The first European space-grade SPARC processor – the ERC32, which is still in use today – used commercial information processing cores. It was based on an open source architecture, but the processor design was proprietary. "This led to problems.
     With proprietary designs you usually don't have access to the source code, and thus making the custom modifications necessary to achieve radiation hardening is difficult," says Weigand. That's why, as the next step, ESA started working on its own processor, named LEON. "The design was fully under our control, so we were finally free to introduce all the RHBD techniques we wanted."

     The latest development in the LEON line is the quad-core GR740, clocked at roughly 250MHz. ("We're expecting to ship the first flight parts towards the end of 2019," Weigand says.) The GR740 is fabricated with 65nm process technology. The device is a system-on-chip designed for high-performance, general-purpose computing based on the SPARC32 instruction set architecture. "The goal in building the GR740 was to achieve higher performance and the capability to have additional devices included in one integrated circuit, while keeping the whole system compatible with previous generations of European space-grade processors," says Weigand.

     Another feature of the GR740 is advanced fault tolerance. The processor can experience a significant number of errors caused by radiation and nonetheless ensure uninterrupted software execution. Each block and function of the GR740 has been optimized for the best possible performance. This meant that components sensitive to single-event upsets were used alongside ones that could withstand them easily. All SEU-sensitive parts have been implemented with a scheme designed to mitigate possible errors through redundancy. For example, some flip-flops (basic processor components that can each store a 1 or a 0) in the GR740 are off-the-shelf commercial parts known as CORELIB FFs. They were chosen because they took less space on the chip and thus increased its computational density.
     The downside was that they were susceptible to SEUs, but this vulnerability has been dealt with by a Block TMR correction scheme in which every bit read from those flip-flops is voted on by modules arranged with adequate spacing among them to prevent multiple-bit upsets (scenarios where one particle flips multiple bits at once). Similar mitigation schemes are implemented for the L1 and L2 cache memories composed of SRAM cells, which are also generally SEU-sensitive. Where the penalty such schemes inflicted on performance was considered too high, ESA engineers went for SEU-hardened SKYROB flip-flops. Those, however, took twice the area of the CORELIBs. When thinking about space and computing power, there is always some kind of trade-off to make.

     So far, the GR740 has passed several radiation tests with flying colors. The chip has been bombarded with heavy ions with linear energy transfer (LET) reaching 125 MeV·cm²/mg and worked through all of it without hiccups. To put that in perspective, the commercial SRAM chips that most likely brought down the Phobos-Grunt latched up when hit with heavy ions of just 0.375 MeV·cm²/mg. The GR740 withstood levels of radiation over 300 times higher than what the Russians had put in their probe.

     Besides near-immunity to single-event effects, the GR740 is specced to take up to 300 krad(Si) of radiation over its lifetime. In the testing phase, Weigand's team even had one of the processors irradiated to 292 krad(Si). Despite that, the chip worked as usual, with no signs of degradation whatsoever. Still, specific tests to check the actual total ionizing dose the GR740 can take are yet to come.

     All those numbers combined mean that a GR740 working in a geostationary Earth orbit should experience one functional error every 350 years. In LEO, this time should be around 1,310 years. And even those errors wouldn't kill the GR740. It would just need to do a reset.
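The relationship between the raw SEU rate and the quoted mean time between functional errors is simple arithmetic. In this sketch, the fraction of SEUs that escape mitigation is a made-up illustrative parameter, chosen only to land near the ~300-year figure cited above; it is not a published GR740 specification:

```python
def years_between_failures(seu_per_day: float, escape_probability: float) -> float:
    """Mean years between functional errors, given a raw SEU rate
    and the (assumed) fraction of SEUs that the mitigation logic
    fails to catch."""
    failures_per_day = seu_per_day * escape_probability
    return 1 / (failures_per_day * 365.25)

# ~9 mitigated SEUs per day in GEO; assume one in a million slips through
print(f"{years_between_failures(9, 1e-6):.0f} years between functional errors")
```

With 9 SEUs a day, an escape rate on the order of one in a million is what it takes to reach a mean time between functional errors of roughly three centuries.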
Unlike the ESA, NASA opted for proprietary work: in 2017, it selected Boeing for the High Performance Spaceflight Computing Processor (Chiplet) contract for the development of prototype Chiplet devices.

America goes proprietary

"Space-grade CPUs developed in the US have traditionally been based on proprietary processor architectures such as PowerPC, because people had more extensive experience working with them and they were widely supported in software," says the Air Force Research Laboratory's Lovelly. After all, the history of space computation began with digital processors delivered by IBM for the Gemini missions back in the 1960s, and the technology IBM worked with was proprietary. To this day, BAE's RAD processors are based on the PowerPC, which was brought to life by a consortium of IBM, Apple, and Motorola. The processors powering the glass cockpits in the Space Shuttle and the Hubble Space Telescope were built on the x86 architecture introduced by Intel. Both PowerPC and x86 were proprietary. So, in keeping with tradition, the latest American design in this field, named High Performance Spaceflight Computing (HPSC), is proprietary too. The difference is that while PowerPC and x86 are best known from desktop computers, the HPSC is based on the ARM architecture that today powers most smartphones and tablets. The HPSC has been designed by NASA, the Air Force Research Laboratory, and Boeing, which is responsible for manufacturing the chips. The HPSC is built around ARM Cortex A53 quad-core processors; it will have two such processors connected by an AMBA bus, making it an octa-core system. This should place its performance somewhere in the range of mid-market 2018 smartphones like the Samsung Galaxy J8 or development boards like the HiKey Lemaker or Raspberry Pi. (That's before radiation hardening, which will cut its performance by more than half.)
Nevertheless, we're no longer likely to read bleak headlines screaming that the 200 processors powering the Curiosity rover would not be enough to beat one iPhone. With the HPSC up and running, just three or four chips should be enough to deliver iPhone-like computing power. "Since we do not yet have an actual HPSC for tests, we can make some educated guesses as to what its performance may be like," says Lovelly. Clock speed was the first aspect to go under scrutiny. Commercial Cortex A53 octa-core processors are usually clocked between 1.2GHz (in the HiKey Lemaker, for example) and 1.8GHz (in the Snapdragon 450). To estimate what the clock speed would look like in the HPSC after radiation hardening, Lovelly compared various space-grade processors with their commercially available counterparts. "We just thought it reasonable to expect a similar hit on performance," he says. Lovelly estimated the HPSC clock speed at 500MHz. This would still be exceptionally fast for a space-grade chip; in fact, if it holds true for the flight version, the HPSC would have the highest clock rate among space-grade processors. But more computing power and higher clock rates usually come at a dear price in space. The BAE RAD5545 is probably the most powerful radiation-hardened processor available today. Fabricated in a 45nm process, it is a 64-bit quad-core machine clocked at 466MHz with power dissipation of up to 20 Watts, and 20 Watts is a lot. The quad-core i5 sitting in a 2018 13-inch MacBook Pro is a 28-Watt processor. It can heat its thin aluminum chassis to temperatures high enough to become an issue for some users, and under more computationally intensive workloads, fans immediately kick in to cool the whole thing down. The problem is that, in space, fans would do absolutely nothing, because there is no air for them to blow onto a hot chip. The only way to get heat out of a spacecraft is to radiate it away, and that takes time.
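Lovelly's estimation method, deriving a derating ratio from known commercial/space-grade pairs and applying it to the A53's commercial clock range, can be illustrated with a toy calculation. The ratios below are invented for illustration; they are not Lovelly's actual data set:

```python
# Hypothetical derating ratios (space-grade clock / commercial clock)
# from a few commercial/space-grade processor pairs. Illustrative only.
derating_ratios = [0.30, 0.35, 0.40]
avg_derating = sum(derating_ratios) / len(derating_ratios)  # 0.35

# Commercial Cortex A53 parts run roughly 1.2-1.8 GHz (per the article);
# take the midpoint as a representative commercial clock.
commercial_clock_ghz = (1.2 + 1.8) / 2  # 1.5 GHz

estimated_hpsc_mhz = commercial_clock_ghz * avg_derating * 1000
print(f"Estimated hardened clock: {estimated_hpsc_mhz:.0f} MHz")
```

With these made-up ratios the estimate lands in the same ballpark as Lovelly's 500MHz figure, which is the point of the exercise: the method is a simple proportional scaling, not a circuit-level simulation.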
Sure, heat pipes are there to take excess heat away from the processor, but that heat eventually has to go somewhere. Moreover, some missions have tight energy budgets and simply can't use power-hungry processors like the RAD5545 under such restrictions. That's why the European GR740 dissipates only 1.5 Watts. It's not the fastest of the lot, but it is the most efficient: it simply gives you the most computational bang per Watt. The HPSC, with its 10-Watt power dissipation, comes in a close second, but not always. "Each core on the HPSC has its own Single Instruction Multiple Data unit," says Lovelly. "This gives it a significant performance advantage over other space-grade processors." SIMD is a technology commonly used in commercial desktop and mobile processors since the 1990s; it helps processors handle tasks like image and sound processing in video games. Let's say we want to brighten up an image. There are a number of pixels, and each one has a brightness value that needs to be increased by two. Without SIMD, a processor would need to go through all those additions in sequence, one pixel after the other. With SIMD, the task can be parallelized: the processor takes multiple data points (the brightness values of all the pixels in the image) and performs the same instruction on them, adding two to all of them simultaneously. And because the Cortex A53 was designed for smartphones and tablets that handle a lot of media content, the HPSC can do this trick as well. "This is particularly beneficial in tasks like image compression, processing, or stereo vision," says Lovelly. "In applications that can't utilize this feature, the HPSC performs slightly better than the GR740 and other top-performing space processors.
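The brightness example just described can be sketched in Python. Real SIMD hardware performs the per-lane additions in a single instruction; the lane grouping below merely illustrates the data layout:

```python
def brighten_scalar(pixels, delta):
    """Scalar path: one add instruction per pixel, in sequence."""
    out = []
    for p in pixels:
        out.append(min(p + delta, 255))  # clamp to the 8-bit maximum
    return out

def brighten_simd(pixels, delta, lanes=4):
    """SIMD-style path: one (simulated) instruction per lane group.

    On actual SIMD hardware all additions within a lane group happen
    simultaneously; here the grouping just shows how the data is batched.
    """
    out = []
    for i in range(0, len(pixels), lanes):
        lane = pixels[i:i + lanes]
        out.extend(min(p + delta, 255) for p in lane)  # one "instruction"
    return out

img = [10, 20, 250, 30, 40, 50, 60, 70]
assert brighten_scalar(img, 2) == brighten_simd(img, 2)
```

Both paths produce the same image; the difference is that the SIMD path issues a quarter as many instructions with four-wide lanes, which is where the speedup in image and signal workloads comes from.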
But when it comes to things where it can be used, the chip gets well ahead of the competitors."

Making space exploration sci-fi again

Chip designers in the US tend to go for more powerful but more energy-hungry space-grade processors because NASA aims to run more large-scale robotic and crewed missions than its European counterpart. In Europe, there are no plans to send humans or car-sized planetary rovers to the Moon or Mars in the foreseeable future. The modern ESA is more focused on probes and satellites, which usually work on tight energy budgets, so something light, nimble, and extremely energy-efficient like the GR740 makes much more sense. The HPSC, in turn, has been designed from the ground up to make at least some of NASA's at-times sci-fi ambitions a reality. Back in 2011, for instance, NASA's Game Changing Development Program commissioned a study to determine what space computing needs would look like over the next 15 to 20 years. A team of experts from various NASA centers came up with a list of problems advanced processors could solve in both crewed and robotic missions. One of the first things they pointed to was advanced vehicle health management, which they deemed crucial for sending humans on long deep-space missions. It boils down to having sensors constantly monitoring the health of crucial components, and fast processors are needed to pull data from all those sensors at high frequencies. A sluggish computer could probably cope if the sensor readouts came in every 10 minutes or so, but if you want to run the entire checkup multiple times a second to achieve something resembling real-time monitoring, the processor needs to be really fast. The idea is to have astronauts seated in front of consoles showing the actual condition of their spaceship, complete with voice alerts and advanced graphics. And running such advanced graphics would also demand fast computers.
The team called that "improved displays and controls." But the sci-fi aspirations do not end at flight consoles. Astronauts exploring alien worlds could one day have augmented-reality features built right into their visors: the view of the physical environment around them would be enhanced with computer-generated video, sound, or GPS data. Augmentation would, in theory, provide situational awareness, highlighting areas worth exploring and warning against potentially dangerous situations. Of course, having AR built into the helmets is only one possible option. Other notable ideas mentioned in the study included hand-held, smartphone-like devices and something vaguely specified as "other display capabilities" (whatever those may be). Faster space-grade processors would be needed to power such computing advances. Faster space-grade processors are meant to improve robotic missions as well, with extreme terrain landing as one of the primary examples. Choosing a landing site for a rover is a trade-off between safety and scientific value. The safest possible site is a flat plain with no rocks, hills, valleys, or outcrops. The most scientifically interesting site, however, is geologically diverse, which usually means it is packed with rocks, hills, valleys, and outcrops. So-called Terrain Relative Navigation (TRN) capability is one way to deal with that. Rovers equipped with TRN could recognize important landmarks, see potential hazards, and navigate around them, narrowing the landing radius down to less than 100 meters. The problem is that current space-grade processors are way too slow for the image processing this demands. When the NASA team behind the study ran a TRN software benchmark on the RAD750, they found that the update from a single camera took roughly 10 seconds. Unfortunately, 10 seconds is a long time when you're falling toward the Martian surface.
To land a rover within a 100-meter radius, an update from a camera would have to be processed every second. For a pinpoint, one-meter landing, estimates would need to come in at 10Hz: 10 updates per second. Other things on NASA's computational wishlist include algorithms that can predict impending disasters based on sensor readouts, intelligent scheduling, advanced autonomy, and so on. All this is beyond the capabilities of current space-grade processors, so in the study, NASA engineers estimated how much processing power would be needed to run those things efficiently. They found that spacecraft health management and extreme terrain landing needed between 10 and 50 GOPS (gigaoperations per second). Futuristic sci-fi flight consoles with fancy displays and advanced graphics needed somewhere between 50 and 100 GOPS, and the same goes for augmented-reality helmets or other devices, which would also consume between 50 and 100 GOPS. Ideally, future space-grade processors would be able to power all those things smoothly. Today, the HPSC running at a power dissipation between 7 and 10 Watts can process 9 to 15 GOPS. This alone would make extreme landing possible, but the HPSC is designed in such a way that this figure can go up significantly. First, those 15 GOPS do not include the performance benefits that the SIMD engine brings to the table. Second, the processor can work connected to other HPSCs and to external devices like special-purpose processors, FPGAs, or GPUs. Thus, a future spaceship could have multiple distributed processors working in parallel, with specialized chips assigned to certain tasks like image or signal processing. No matter where humanity's deep-space dreams go next, we won't have to wait long to see where the computing power stands. The LEON GR740 is scheduled for delivery to ESA at the end of this year, and after a few additional tests it should be flight-ready in 2020.
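The study's GOPS figures can be lined up against a single HPSC's baseline throughput to see which wishlist items are within reach before SIMD and multi-chip scaling kick in. The numbers below simply restate the article's figures:

```python
# GOPS requirements from NASA's 2011 study (per the article).
requirements_gops = {
    "vehicle health management": (10, 50),
    "extreme terrain landing":   (10, 50),
    "advanced flight consoles":  (50, 100),
    "augmented reality helmets": (50, 100),
}

# Upper end of a single HPSC at 7-10 W, before SIMD gains (per the article).
hpsc_baseline_gops = 15

for task, (low, high) in requirements_gops.items():
    verdict = "within reach" if hpsc_baseline_gops >= low else "needs SIMD or more chips"
    print(f"{task}: needs {low}-{high} GOPS -> {verdict}")
```

Under these figures a lone HPSC clears the 10-GOPS floor for health management and terrain landing but falls short of the 50-GOPS floor for consoles and AR, which is exactly why the design emphasizes SIMD and chaining multiple chips.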
The HPSC, in turn, is set for a fabrication phase that should begin in 2021 and last until 2022. Testing is expected to take a few months in 2022. NASA should get flight-ready HPSC chips by the end of 2022. That means, all other complicating timeline factors aside, at least the future of space silicon appears on track to be ready for spaceships taking humans back to the Moon in 2024. Jacek Krywko is a science and technology writer based in Warsaw, Poland. He covers space exploration and artificial intelligence research, and he has previously written for Ars about facial-recognition screening, teaching AI-assistants new languages, and AI in space. Source: Space-grade CPUs: How do you send more computing power into space? (Ars Technica)
  6. Some days, you might feel like a pretty substantial person. Maybe you have a lot of friends, or an important job, or a really big car. But it might humble you to know that all of those things – your friends, your office, your really big car, you yourself, and even everything in this incredible, vast Universe – are almost entirely, 99.9999999 percent, empty space. Here's the deal. As I previously wrote in a story for the particle physics publication Symmetry, the size of an atom is governed by the average location of its electrons: how much space there is between the nucleus and the atom's amorphous outer shell. Nuclei are around 100,000 times smaller than the atoms they're housed in. If the nucleus were the size of a peanut, the atom would be about the size of a baseball stadium. If we lost all the dead space inside our atoms, we would each be able to fit into a particle of dust, and the entire human species would fit into the volume of a sugar cube. So then where does all our mass come from? Energy! At a pretty basic level, we're all made of atoms, which are made of electrons, protons, and neutrons. And at an even more basic, or perhaps the most basic, level, those protons and neutrons, which hold the bulk of our mass, are made of a trio of fundamental particles called quarks. But, as I explained in Symmetry, the mass of these quarks accounts for just a tiny percentage of the mass of the protons and neutrons, and the gluons that hold the quarks together are completely massless. A lot of scientists think that almost all the mass of our bodies comes from the kinetic energy of the quarks and the binding energy of the gluons. So if all of the atoms in the Universe are almost entirely empty space, why does anything feel solid? The idea of empty atoms huddling together, composing our bodies and buildings and trees, might be a little confusing. If our atoms are mostly space, why can't we pass through things like weird ghost people in a weird ghost world?
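The 1:100,000 size ratio quoted above translates directly into volume. A rough calculation, treating both atom and nucleus as spheres:

```python
# Nucleus radius is ~100,000 times smaller than the atom's (per the article).
ratio = 1e-5

# Volume scales with the cube of radius, so the nucleus occupies
# only about one part in 10^15 of the atom's volume.
occupied_fraction = ratio ** 3
empty_percent = (1 - occupied_fraction) * 100

print(f"Roughly {empty_percent}% of the atom is empty by volume")
```

By raw volume the atom comes out even emptier than the commonly quoted 99.9999999 percent; where exactly one draws the line depends on how much of the electrons' smeared-out wave functions you count as "occupied," which is why quoted figures vary.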
Why don't our cars fall through the road, through the centre of the Earth, and out the other side of the planet? Why don't our hands glide through other hands when we give out high fives? It's time to reexamine what we mean by empty space. Because as it turns out, space is never truly empty. It's actually full of a whole fistful of good stuff, including wave functions and invisible quantum fields. You can think about the empty space in an atom as you might think about an electric fan with rotating blades. When the fan isn't in motion, you can tell that a lot of what's inside that fan is empty space. You can safely stick your hand into the space between the blades and wiggle your fingers in the nothingness. But when that fan is turned on, it's a different story. If you're silly enough to shove your hand into that 'empty space', those blades will inevitably swing around and smack into it… relentlessly. Technically, electrons are point particles, which means they have no volume. But they do have something called a wave function occupying a nice chunk of the atom. And because quantum mechanics likes to be weird and confusing, the volume-less electron is somehow simultaneously everywhere in that chunk of space. The blades of the fan are akin to electrons zipping around the atom, occupying chunks of space with their wave functions. It's a painful reminder that what might seem like empty space can feel pretty solid.

You've never really touched anything in your life

Are you sitting down for this? Well, you're not really. Your butt isn't actually touching the chair you're sitting on. Since the meat of your atoms is nestled away in nuclei, when you 'touch' someone (or something), you aren't actually feeling their atoms. What you're feeling is the electromagnetic force of your electrons pushing away their electrons. On a very, very technical level, you're not actually sitting on that chair. You're hovering ever so slightly above it.
So to conclude: Your very important human body is really, kind of, in a way, just a misleading collection of empty spaces on an empty planet in an empty Universe. But at least you have a big car. Source BTW, If I am 99.99% space and I never touched anything, I don't exist. I reject sciences and its theories.
  7. In 2004, astronomers spotted a unique celestial feature that, when observed in ultraviolet, bore the appearance of a glowing eye staring directly back at Earth. Unbeknownst to them, it would take 16 years to figure out what caused the Blue Ring Nebula to form. The Blue Ring Nebula (Image: NASA). The Blue Ring Nebula is the remnant of two merging stars, according to new research published today in Nature. The bright yellow dot at the center of the feature is the surviving stellar remnant (the merger of the two stars); the blue ring is an expanding cloud of debris interacting with the interstellar medium; and the magenta outline is the leading edge of the shockwave. To be clear, the blue and magenta colors seen in the nebula represent ultraviolet light, and they can't actually be seen with the naked eye. Colliding stars are spotted all the time, so this discovery might not seem very special or exciting. What is special, however, is the timing of our observations: astronomers are witnessing this stellar merger some 5,000 years after it happened (excluding the time it took the light to reach Earth). Typically, colliding stars are seen immediately after their smash-up. "The merging of two stars is fairly common, but they quickly become obscured by lots of dust as the ejecta from them expands and cools in space, which means we can't see what has actually happened," explained Keri Hoadley, the lead author of the new study and a physicist at the California Institute of Technology, in a statement. "We think this object represents a late stage of these transient events, when the dust finally clears and we have a good view." To which she added: "But we also caught the process before it was too far along; after time, the nebula will dissolve into the interstellar medium, and we would not be able to tell anything happened at all." Diagram showing how the nebula would appear when viewed from the side.
(Graphic: Mark Seibert) The Blue Ring Nebula, therefore, is not something we've ever seen before, and it's providing some new science. The expanding blue ring, for example, is actually one of a pair. We can't really see it from our perspective on Earth, but the nebula is actually shaped like two funnels connected at their points, with the merged star at the center. These rings are moving away from each other in opposite directions, at speeds reaching 250 miles per second (400 kilometers per second). The team calls this feature a "biconical symmetrical outflow," and it's providing some important clues as to what happened during the fatal encounter. Speaking at a press conference on Tuesday, Mark Seibert, a co-author and astrophysicist with the Carnegie Institution for Science, said that "everything we've learned is interesting" and that the merged star "is one of a kind right now." Indeed, this stellar object is somewhat of an oddball, in that it's surrounded by a disk, the material from which is pouring directly into the star. Diagram showing how we see the feature from Earth (Graphic: Mark Seibert). This story started in 2004 when Seibert, while participating in NASA's Galaxy Evolution Explorer mission (GALEX), became one of the first to gaze upon the Blue Ring Nebula. At the time, Chris Martin, a co-author of the new study and a physicist at Caltech, thought "it was a really interesting object" and that his team "should come up with a nice paper within the year to explain it all," as he told reporters yesterday. That paper "within the year," however, was not to be, as the nature of the object remained elusive. Data gathered by Caltech's Hale Telescope at Palomar Observatory and the W. M. Keck Observatory in Hawaii revealed the presence of a shockwave around the star, which hinted at something violent and dramatic.
An early hypothesis was that a hot Jupiter-like planet was caught in a death spiral around the star, and we were witnessing its destruction in the form of the nebula. But the researchers could not be certain, and "ambiguities remained about it throughout the study," said Martin. The scientists also looked into the status of the central star, known as TYC 2597-735-1, finding that it was quite old and no longer burning hydrogen in its core. Many other aspects of the object did not match their expectations for stars. Calling it a "Sherlock Holmes mystery," Martin said it "became impossible to come up with a scenario to explain all of these observations," so "after a few years of study, we went on to do other things, and the project lay dormant for a while." Things changed in 2017 when Hoadley joined Martin's group as a postdoctoral fellow, eager to take the lead on this hibernating project. "I heard about it on the second day of the job and was immediately hooked," she said at the virtual press conference. The team had collected lots of data, but her challenge was to "figure out how to put all the pieces together" and, especially, to determine how it was that the nebula was glowing in the first place. As work progressed, however, the team began to realize they weren't dealing with a planet and that the likely scenario involved a stellar collision. The mass of material being ejected by the star, for example, was too large for a planet, as Hoadley explained. What's more, data collected by the Habitable Zone Planet Finder on the Hobby-Eberly Telescope in Texas found no evidence of a planet within this system. Things got weirder when the team took a look at archival data collected by NASA's Spitzer Space Telescope and the Wide-field Infrared Survey Explorer (WISE), along with other infrared observatories. This data pointed to the presence of an accretion disk around the star.
Normally, these rings of dust are seen around young stars, but TYC 2597-735-1 is actually quite old. To help make sense of all this data, the team recruited astrophysicist Brian Metzger from Columbia University, an expert in cosmic mergers. Excitingly, Metzger's mathematical and computational models meshed well with observations of the Blue Ring Nebula. "It wasn't just that Brian could explain the data we were seeing; he was essentially predicting what we had observed before he saw it," explained Hoadley in a Caltech press release. "He'd say, 'If this is a stellar merger, then you should see X,' and it was like, 'Yes! We see that!'" So here's the story of the Blue Ring Nebula, as revealed in the new paper. Thousands of years ago, a small star was orbiting a larger star with roughly the same mass as our Sun. As the larger star aged, it became bloated, expanding to reach very close to its smaller companion. The smaller star (around one-tenth the size of our Sun) fell into a downward spiral that produced a gaseous disk. The big star eventually subsumed the smaller one, producing an expanding cloud of debris that was sliced in half by the disk. This merger resulted in the two cone-shaped debris clouds we see today. In the thousands of years that followed, the expanding cloud of debris cooled, forming hydrogen molecules that interacted with the interstellar medium. Today, we see these collisions as bright ultraviolet emissions. The cloud is now "dissolving into the interstellar medium," and "we're just happening to catch it as all the exciting particles are present," Hoadley told reporters. Hoadley expects the Blue Ring Nebula to last for another thousand to a few tens of thousands of years, after which time the feature will disappear completely. It's a veritable blink of an eye in cosmological terms, and a wonderful opportunity to do some fascinating science. Source
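Taking the article's own figures, ejecta moving at roughly 400 km/s for about 5,000 years, gives a back-of-the-envelope sense of the nebula's present scale. This is an illustrative estimate, not a number from the paper:

```python
SECONDS_PER_YEAR = 3.156e7   # one Julian year, in seconds
LIGHT_YEAR_M = 9.461e15      # one light-year, in meters

speed_m_s = 400e3            # 400 km/s outflow speed (per the article)
elapsed_s = 5000 * SECONDS_PER_YEAR

radius_m = speed_m_s * elapsed_s
print(f"Each debris cone extends roughly {radius_m / LIGHT_YEAR_M:.1f} light-years")
```

At that speed the leading edge of each cone would now sit several light-years from the star, assuming the ejecta hasn't decelerated much against the interstellar medium.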
  8. Documents that were leaked by NSA whistleblower Edward Snowden prove that the United States has been ruled by a race of tall, white space aliens who also assisted the rise of Nazi Germany in the 1930s. A Federal Security Services (FSB) report on the nearly two million highly classified top-secret documents obtained from the United States Department of Defense (DOD) run National Security Agency-Central Security Service (NSA/CSS) by the American expatriate Edward Snowden states that this information is providing "incontrovertible proof" that an "alien/extraterrestrial intelligence agenda" is driving US domestic and international policy. Snowden, who has been given asylum in Russia, leaked documents claiming that a race of extraterrestrial "tall whites" arrived on Earth, helped Nazi Germany build a fleet of advanced submarines in the 1930s, and then met in 1954 with President Dwight Eisenhower, "where the 'secret regime' currently ruling over America was established. This FSB report warns that the 'Tall White' agenda being implemented by the 'secret regime' ruling the United States calls for the creation of a global electronic surveillance system meant to hide all true information about their presence here on earth as they enter into what one of Snowden's documents calls the 'final phase' of their end plan for total assimilation and world rule." The report further says that the U.S. government is embroiled in a "cataclysmic" power struggle between President Obama, who heads the alien shadow government, and some unknown force that opposed the U.S.-alien alliance. "Most to be feared by Russian policy makers and authorities, this [FSB] report concludes, is if those opposing the 'Tall White' 'secret regime' ruled over by Obama have themselves aligned with another alien-extraterrestrial power themselves." Any good conspiracy theory needs a patina of truth, a bit of intellectual cover to camouflage the craziness.
In this case, the whatdoesitmean.com/Fars story cites Paul Hellyer, the 1960s Canadian defense minister who is now a fervent UFO activist. Hellyer allegedly has confirmed the accuracy of Snowden’s UFO leaks. Source ======================== A rebuttal has been found, claiming the source is a "semi-official" news agency.
  9. After more than 13 years of continuous habitation aboard the International Space Station, it is finally possible to log into Ustream and watch the Earth spinning on its axis in glorious HD. This video feed (embedded below) comes from four high-definition cameras, delivered by last month's SpaceX CRS-3 resupply mission, that are attached to the outside of the station. You can open up the Ustream page at any time, and as long as it isn't night time aboard the ISS, you'll be treated to a beautiful view of the Earth from around 250 miles (400 km) up. http://www.ustream.tv/channel/17074538 This rather awesome real-time video stream (which also includes the ISS-to-mission-control audio feed) comes by way of the High Definition Earth Viewing experiment. HDEV is notable because it consists of four commercial off-the-shelf (COTS) high-definition video cameras that are each enclosed in a pressurized box but otherwise exposed to the rigors of space (most notably cosmic radiation). The purpose of HDEV, beyond providing us with a live stream of our own frickin' planet, is to see if commercial cameras are viable for future space missions, potentially saving a lot of money (space cameras have historically been expensive, custom-designed things). HDEV, which consists of just a single enclosure, was delivered to the ISS a couple of weeks ago by SpaceX CRS-3. The box was connected to the underside of the ISS via EVA/spacewalk, with one camera pointing forward (Hitachi), two cameras facing aft (Sony/Panasonic), and one pointing nadir (Toshiba, down towards Earth). If you watch the stream, you will notice that it hops between the four cameras in sequence, with gray and black color slates in between each switch. If the feed is permanently gray, then HDEV is switched off, or communications have been lost. Also note that the ISS has an orbital period of just 93 minutes; for a considerable part of that time, the station is in the Earth's shadow and can't see much.
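The quoted 93-minute orbital period is easy to sanity-check against the station's ~250-mile (400 km) altitude using Kepler's third law for a circular orbit. This check is mine, not from the article:

```python
import math

MU_EARTH = 3.986e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # Earth's mean radius, m

altitude_m = 400e3    # ~250 miles, per the article
a = R_EARTH + altitude_m  # orbital radius of a circular orbit

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)
print(f"Orbital period: {period_s / 60:.1f} minutes")  # roughly 92 minutes
```

The result comes out within a minute or two of the quoted figure; the small discrepancy is expected, since the ISS altitude drifts between roughly 400 and 420 km.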
Inside the HDEV box: the Hitachi camera is in the top left, the Sony and Panasonic cameras are in the top right, and the Toshiba camera is along the bottom edge. HDEV operational diagram. The active video camera is connected to the ISS Columbus module via an Ethernet link, and the footage is then beamed down to the ground. From there, it looks like the video feed is combined with the current ISS-to-mission-control audio feed and simply uploaded to Ustream. It's an impressively simple (and cheap) setup. It's also worth mentioning that parts of HDEV were designed by American high school students through NASA's HUNCH program. It's good to see NASA fostering the next generation of astronauts and scientists! A very cloudy Spain, as seen from the International Space Station. Just off the east coast of Spain, the Mediterranean, with the north coast of Africa in the distance, I think. In this photo, the International Space Station is moving into night time (pre-dawn) above Sudan in Africa. The photos in this story are screenshots from the video feed. I think they're mostly of Spain and north Africa (the top photo is of Libya). It seems to be pretty cloudy on Earth today, though; I watched the feed for a couple of hours and never really got a clear shot of the ground. Source
  10. The key to a successful Mars landing is the same thing that matters in landing on any planet: you have to slow down before you hit the ground. That's why scientists are testing a new supersonic parachute that they hope will advance the technology needed to land heavier-than-ever spacecraft, like the kind that will eventually carry humans to the Red Planet. This week, NASA engineers are gathered at Hawaii's Pacific Missile Range Facility to launch their new Low-Density Supersonic Decelerator (LDSD), a complex package of devices including an inflatable flying saucer and a huge parachute designed for Mars landings. The name of the test vehicle is Keiki o ka honua, or "child from earth" in Hawaiian. Parachutes have long helped devices touch down on Mars, dating back to the first successful robot landing in July 1976, but this parachute is much, much bigger than the ones used in previous missions. At 110 feet in diameter, the Keiki o ka honua parachute is more than double the size of the one that carried the rover Curiosity down to the surface of Mars in August 2012. Front page news about the first successful Mars landing in 1976. (New York Times screenshot) To simulate the speed of spacecraft coming in for a landing in the super-thin atmosphere of Mars, scientists are testing the LDSD at an altitude of 180,000 feet. That's 34 miles above the surface of the Earth, or about 10 miles higher than the altitude from which stuntman Felix Baumgartner made his near-space parachute jump. "We have to go halfway to the edge of space," said Ian Clark, LDSD principal investigator, in a conference call with reporters on Monday. LDSD will get two-thirds of the way to its mark in a balloon and the remainder of the way in a rocket. The balloon is big enough to "fit snugly into Pasadena's Rose Bowl," according to NASA.
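The altitude comparison above checks out with a quick unit conversion. The Baumgartner jump altitude is from outside the article (his 2012 jump is widely reported at about 128,100 feet):

```python
FEET_PER_MILE = 5280

ldsd_test_altitude_ft = 180_000
baumgartner_jump_ft = 128_100   # Felix Baumgartner's 2012 jump (assumed figure)

ldsd_miles = ldsd_test_altitude_ft / FEET_PER_MILE
gap_miles = (ldsd_test_altitude_ft - baumgartner_jump_ft) / FEET_PER_MILE

print(f"LDSD test altitude: {ldsd_miles:.0f} miles")   # ~34 miles
print(f"Above Baumgartner's jump by: {gap_miles:.0f} miles")  # ~10 miles
```

Both rounded results match the article's "34 miles" and "about 10 miles farther" claims.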
A fraction of a second after the saucer drops from the balloon at 120,000 feet, a rocket engine will shoot it another 60,000 feet toward outer space. Just getting the device to the point where the rocket takes over means overcoming any number of potential problems. "If we fire that motor and we get data back from it, that is a great day," said LDSD project manager Mark Adler. Artist's rendering of Keiki o ka honua rocketing to 180,000 feet. (NASA/JPL-Caltech) If all goes as planned, once the LDSD gets to 180,000 feet, a Kevlar tube will inflate around the device; this tube helps create drag to slow it down as it falls, and collects data along the way. At maximum speed, the whole package will travel at four times the speed of sound and heat up to nearly 600 degrees Fahrenheit, which is about the temperature a pizza would experience inside a brick oven. In order to withstand that kind of heat, the inflatable device is made from material "similar to the Kevlar we use to build bullet proof vests," Clark says. By the time the parachute opens, the device will have slowed to Mach 2.5, or about 1,900 miles per hour. It will still be another 45 minutes until the Keiki o ka honua touches down in the Pacific Ocean, where scientists will collect it. The device will be outfitted with four GoPro cameras, plus several other cameras. A livestream of the launch will run on NASA's website. Weather conditions permitting, it could happen as early as tomorrow, Tuesday, June 3, beginning around 2 p.m. ET. NASA plans to test two more saucer-shaped vehicles in Hawaii about a year from now. Source