The moon shouldn't feel too jealous. Earth has another satellite right now, but it's only a temporary fling. The exact identity of the object, named 2020 SO, is still a lingering question, but you can watch it on Monday, Nov. 30, when it gets close to Earth. The Virtual Telescope Project will livestream the flyby.
The Earth's gravitational pull captured the object into our planet's orbit earlier this month, which makes 2020 SO a sort of mini-moon.
Usually, we'd expect an object like this to be an asteroid, and there are plenty of those flying around in space. But 2020 SO may have a more Earthly identity. The orbit of 2020 SO around the sun -- which is very similar to Earth's -- has convinced researchers it's probably not a rock, but is actually a piece of space junk from a NASA mission.
Virtual Telescope Project founder Gianluca Masi already managed to capture a view of the tiny object on Nov. 22. It appears as a dot against a backdrop of stars.
"One of the possible paths for 2020 SO brought the object very close to Earth and the Moon in late September 1966," CNEOS Director Paul Chodas said in a NASA statement earlier in November. "It was like a eureka moment when a quick check of launch dates for lunar missions showed a match with the Surveyor 2 mission."
NASA's ill-fated Surveyor 2 lander ended up crashing on the moon's surface, but the Centaur rocket booster escaped into space.
NASA expects 2020 SO to stick around in Earth orbit until March 2021, when it will wander off into a new orbit around the sun. The agency's Planetary Defense Coordination Office shared a visual of the object's journey around Earth.
The upcoming close approach should give astronomers a chance to dial in 2020 SO's composition and tell us if it is indeed a relic from the 1960s.
Even with a telescope view, 2020 SO should look like a bright spot of light traveling against the dark of space. The cool thing is getting the chance to witness a piece of space history returning to its old stomping grounds.
Researchers from the Centre of Excellence for Quantum Computation and Communication Technology (CQC2T) working with Silicon Quantum Computing (SQC) have located the 'sweet spot' for positioning qubits in silicon to scale up atom-based quantum processors.
Creating quantum bits, or qubits, by precisely placing phosphorus atoms in silicon—the method pioneered by CQC2T Director Professor Michelle Simmons—is a world-leading approach in the development of a silicon quantum computer.
In the team's research, published today in Nature Communications, precision placement has proven to be essential for developing robust interactions—or coupling—between qubits.
"We've located the optimal position to create reproducible, strong and fast interactions between the qubits," says Professor Sven Rogge, who led the research.
"We need these robust interactions to engineer a multi-qubit processor and, ultimately, a useful quantum computer."
Two-qubit gates—the central building block of a quantum computer—use interactions between pairs of qubits to perform quantum operations. For atom qubits in silicon, previous research has suggested that for certain positions in the silicon crystal, interactions between the qubits contain an oscillatory component that could slow down the gate operations and make them difficult to control.
"For almost two decades, the potential oscillatory nature of the interactions has been predicted to be a challenge for scale-up," Prof. Rogge says.
"Now, through novel measurements of the qubit interactions, we have developed a deep understanding of the nature of these oscillations and propose a strategy of precision placement to make the interaction between the qubits robust. This is a result that many believed was not possible."
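A toy model can make the placement sensitivity concrete: if the exchange coupling between two donors follows a smooth decay multiplied by a valley-interference oscillation, a sub-nanometre placement error near an oscillation node swings the coupling wildly, while along a "sweet spot" direction where the oscillation is suppressed the coupling stays robust. The sketch below is purely illustrative; the functional form and every parameter are invented for demonstration, not taken from the Nature Communications paper.

```python
import math

def exchange_coupling(d_nm, j0=1.0, decay_nm=10.0, valley_nm=0.66, modulation=1.0):
    """Toy model of exchange coupling J between two donor qubits.

    The exponential envelope gives the smooth decay with separation; the
    cosine term mimics valley interference, which oscillates on roughly
    the scale of the silicon lattice.  With modulation=0 (standing in for
    the 'sweet spot' placement direction) the oscillation vanishes and J
    follows the envelope alone.  All numbers are invented illustrations,
    not values from the paper.
    """
    envelope = j0 * math.exp(-d_nm / decay_nm)
    oscillation = 1.0 + modulation * math.cos(2 * math.pi * d_nm / valley_nm)
    return envelope * max(oscillation, 0.0) / 2.0

# A 0.1 nm placement error barely changes J when the oscillation is suppressed...
j_a = exchange_coupling(5.0, modulation=0.0)
j_b = exchange_coupling(5.1, modulation=0.0)

# ...but changes J several-fold when the oscillation is present.
j_c = exchange_coupling(5.0, modulation=1.0)
j_d = exchange_coupling(5.1, modulation=1.0)
```

In this toy picture, precision placement along the suppressed direction is what makes the coupling reproducible.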
Finding the 'sweet spot' in crystal symmetries
The researchers say they've now uncovered that exactly where you place the qubits is essential to creating strong and consistent interactions. This crucial insight has significant implications for the design of large-scale processors.
"Silicon is an anisotropic crystal, which means that the direction the atoms are placed in can significantly influence the interactions between them," says Dr. Benoit Voisin, lead author of the research.
"While we already knew about this anisotropy, no one had explored in detail how it could actually be used to mitigate the oscillating interaction strength."
"We found that there is a special angle, or sweet spot, within a particular plane of the silicon crystal where the interaction between the qubits is most resilient. Importantly, this sweet spot is achievable using existing scanning tunnelling microscope (STM) lithography techniques developed at UNSW."
"In the end, both the problem and its solution directly originate from crystal symmetries, so this is a nice twist."
Using an STM, the team is able to map out the atoms' wave function in 2-D images and identify their exact spatial location in the silicon crystal—first demonstrated in 2014 with research published in Nature Materials and advanced in a 2016 Nature Nanotechnology paper.
In the latest research, the team used the same STM technique to observe atomic-scale details of the interactions between the coupled atom qubits.
"Using our quantum state imaging technique, we could observe for the first time both the anisotropy in the wavefunction and the interference effect directly in the plane—this was the starting point to understand how this problem plays out," says Dr. Voisin.
"We understood that we had to first work out the impact of each of these two ingredients separately, before looking at the full picture to solve the problem—this is how we could find this sweet spot, which is readily compatible with the atomic placement precision offered by our STM lithography technique."
Building a silicon quantum computer atom by atom
UNSW scientists at CQC2T are leading the world in the race to build atom-based quantum computers in silicon. The researchers at CQC2T, and its related commercialisation company SQC, are the only team in the world with the ability to see the exact position of their qubits in the solid state.
In 2019, the Simmons group reached a major milestone in their precision placement approach—with the team first building the fastest two-qubit gate in silicon by placing two atom qubits close together, and then controllably observing and measuring their spin states in real-time. The research was published in Nature.
Now, with the Rogge team's latest advances, the researchers from CQC2T and SQC are positioned to use these interactions in larger scale systems for scalable processors.
"Being able to observe and precisely place atoms in our silicon chips continues to provide a competitive advantage for fabricating quantum computers in silicon," says Prof. Simmons.
The combined Simmons, Rogge and Rahman teams are working with SQC to build the first useful, commercial quantum computer in silicon. Co-located with CQC2T on the UNSW Sydney campus, SQC's goal is to build the highest quality, most stable quantum processor.
More information: B. Voisin et al, Valley interference and spin exchange at the atomic scale in silicon, Nature Communications (2020). DOI: 10.1038/s41467-020-19835-1
Citation: Hitting the quantum 'sweet spot': Researchers find best position for atom qubits in silicon (2020, November 30) retrieved 1 December 2020 from https://ift.tt/37k10rq
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Source: Phys.org
Researchers at the Niels Bohr Institute, University of Copenhagen, have investigated more than 1000 planetary systems orbiting stars in our own galaxy, the Milky Way, and have discovered a series of connections between planetary orbits, number of planets, occurrence and the distance to their stars. It turns out that our own solar system in some ways is very rare, and in others very ordinary.
It is rare to have eight planets, but the study shows that the solar system follows exactly the same, very basic rules for planet formation around a star that all the others do. Exactly what makes it special enough to harbor life remains an open question. The study is now published in MNRAS.
Eccentric planet orbits are the key to determining the number of planets
There is a very clear correlation between the eccentricity of the orbits and the number of planets in any given solar system. When the planets form, they begin in circular orbits in a cloud of gas and dust. But they are still relatively small, up to sizes comparable to the moon. On a slightly longer time scale they interact via gravitation and acquire more and more eccentric or elliptic orbits. Because elliptical orbits cross one another, the planets start colliding, and they grow in size through the collisions. If the end result of the collisions is that all the pieces become just one or a few planets, then they stay in elliptical orbits. But if they end up becoming many planets, the gravitational pull between them makes them lose energy—and so they settle into more and more circular orbits.
The researchers have found a very clear correlation between the number of planets and how circular the orbits are. "Actually, this is not really a surprise," professor Uffe Gråe Jørgensen explains. "But our solar system is unique in the sense that no other solar systems with as many planets as ours are known. So perhaps it could be expected that our solar system doesn't fit into the correlation. But it does—as a matter of fact, it is right on."
The only solar systems that don't fit this rule are systems with only one planet. In some cases, the reason is that the single planet orbits the star in very close proximity; in others, the systems may actually hold more planets than initially assumed. "In these cases, we believe that the deviation from the rule can help us reveal more planets that were hidden up until now," Nanna Bach-Møller, first author of the scientific article, explains. If we can measure the eccentricity of a planet's orbit, then we know how many other planets must be in the system—and vice versa: if we have the number of planets, we know their orbits. "This would be a very important tool for detecting planetary systems like our own solar system, because many exoplanets similar to the planets in our solar system would be difficult to detect directly, if we don't know where to look for them."
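The inference the researchers describe, from eccentricity to planet count and back, can be sketched with a simple power-law anti-correlation of the form e = c * n^(-alpha). The constants below are placeholders chosen for illustration, not the fit published in MNRAS.

```python
def mean_eccentricity(n_planets, c=0.6, alpha=1.2):
    """Illustrative power-law anti-correlation between a system's planet
    count n and its mean orbital eccentricity.  c and alpha are
    placeholder values, not the published fit."""
    return c * n_planets ** -alpha

def implied_multiplicity(mean_ecc, c=0.6, alpha=1.2):
    """Invert the same relation: given an observed mean eccentricity,
    estimate how many planets the system should hold."""
    return (c / mean_ecc) ** (1.0 / alpha)

# One-planet systems sit at high eccentricity; an 8-planet system like
# ours is predicted to have nearly circular orbits.
e1 = mean_eccentricity(1)
e8 = mean_eccentricity(8)

# Round trip: a low observed eccentricity implies a well-populated system.
n_implied = implied_multiplicity(e8)
```

With any such relation in hand, an anomalously high eccentricity for a known planet count hints at undetected companions, which is the detection strategy quoted above.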
The Earth is among the lucky 1%
No matter which method is used in the search for exoplanets, the result is the same—so there is basic, universal physics at play. The researchers can therefore ask how many systems share our solar system's eccentricity, and use that to assess how many systems have as many planets as ours. The answer is that only 1% of all solar systems have the same number of planets as our solar system or more. With approximately 100 billion stars in the Milky Way, however, that is still no less than one billion such systems. There are approximately 10 billion Earth-like planets in the habitable zone, i.e. at a distance from their star that allows for liquid water. But there is a huge difference between being in the habitable zone and being habitable, or having developed a technological civilization, Uffe Gråe Jørgensen stresses. "Something must explain why there aren't huge numbers of UFOs out there. Once the conquest of the planets in a solar system has begun, it proceeds pretty quickly—we can see that in our own civilization: we have been to the moon, and we already have several robots on Mars. But there aren't a whole lot of UFOs from the billions of Earth-like exoplanets in the habitable zones of the stars, so life, and technological civilizations in particular, is probably still fairly scarce."
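The arithmetic behind these estimates is a short chain of fractions, sketched here using only the round numbers quoted in the article.

```python
stars_in_milky_way = 100e9        # approximate figure quoted above

# Only ~1% of systems have at least as many planets as the solar system.
similar_multiplicity_fraction = 0.01
similar_systems = stars_in_milky_way * similar_multiplicity_fraction  # 1 billion

# Earth-like planets in the habitable zone, per the article: ~10 billion,
# i.e. roughly one habitable-zone Earth analogue per ten stars.
earthlike_in_habitable_zone = 10e9
per_star = earthlike_in_habitable_zone / stars_in_milky_way
```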
The Earth is not particularly special—the number of planets in the system is what it is all about
What more does it take to harbor life than being an Earth-size planet in the habitable zone? What is really special about Earth and our solar system? Earth itself is not special—there are plenty of Earth-like planets out there. But perhaps the number and nature of the other planets matter. Our solar system holds many large gas planets: half of its eight planets are giants. Could the existence of the large gas planets be the cause of our existence here on Earth? Part of that debate is the question of whether the large gas planets, Saturn and Jupiter, redirected water-bearing comets to Earth when the planet was half a billion years old, enabling life to form here.
This is the first time a study has shown how unique it is for a solar system to be home to eight planets, but at the same time, shows that our solar system is not entirely unique. Our solar system follows the same physical rules for forming planets as any other solar system, we just happen to be in the unusual end of the scale. And we are still left with the question of why, exactly, we are here to be able to wonder about it.
More information: Nanna Bach-Møller et al. Orbital eccentricity–multiplicity correlation for planetary systems and comparison to the Solar system, Monthly Notices of the Royal Astronomical Society (2020). DOI: 10.1093/mnras/staa3321
Citation: The solar system follows the galactic standard—but it is a rare breed (2020, November 30) retrieved 30 November 2020 from https://ift.tt/2KVGuWH
If humans are ever going to visit Mars, they may well need to make some crucial resources while they are there in order to survive long enough to explore and restock for the long return journey. Although the days of flowing surface water are long gone, the red planet is not entirely without the raw ingredients to make this work.
The Mars 2020 mission that launched in July is carrying an experiment with exactly this goal in mind. MOXIE—the Mars Oxygen In-Situ Resource Utilization Experiment—is a box not much bigger than a toaster that produces oxygen from atmospheric CO2. While a much larger version would be required to make liquid-oxygen fuel for a rocket, MOXIE is sized to produce about the amount of oxygen an active person needs to breathe.
A new study led by Pralay Gayen at Washington University in St. Louis, Missouri, tests a device that could tap a different resource—perchlorate brine believed to exist in the Martian ground at some locations. The device can split the water in that brine, producing pure oxygen and hydrogen.
Perchlorate (ClO₄⁻) salts, we have discovered, are common on Mars. These salts have an affinity for water molecules and can collect water vapor over time, turning into a brine with a very low freezing temperature. There is evidence of sizable amounts of what could be this brine beneath the surface of Mars' north polar region, and smaller amounts have been invoked as a possible explanation for the active streaks that sometimes appear on Martian slopes.
To test whether we could tap this resource, the researchers built an electrolysis device that they ran in Mars-like conditions. It uses a standard platinum-carbon cathode and a special lead-ruthenium-oxygen anode the researchers developed previously. They mixed up a plausible concentration of magnesium perchlorate brine, placed it in a sealed container, and filled the headspace with pure CO2 for a Mars-like atmosphere. The whole thing was kept at -36°C (-33°F). When powered up, brine flowed through the device, splitting into pure oxygen gas captured on the anode side and pure hydrogen gas on the cathode side.
The device worked quite well, producing about 25 times as much oxygen as its MOXIE counterpart can manage. MOXIE requires about 300 watts of power to run, and this device matches that oxygen output on about 12 watts. Plus, it also produces hydrogen that could be used in a fuel cell to generate electricity. And it would be smaller and lighter than MOXIE, the researchers say. Ultimately, all this just illustrates that MOXIE is working with a lower quality—but more widely accessible—resource in atmospheric CO2 instead of water.
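As a quick consistency check on those figures: matching MOXIE's oxygen output on 12 watts instead of 300 is a 25-fold gain in oxygen per watt, in line with the device producing about 25 times the oxygen at comparable power. Only the numbers quoted above are used here.

```python
moxie_power_w = 300.0          # power MOXIE needs for its baseline O2 rate
electrolyzer_power_w = 12.0    # power the brine device needs to match that rate
oxygen_ratio = 25.0            # O2 output vs MOXIE, per the study

# Same output at 1/25 the power is a 25x per-watt advantage...
per_watt_advantage = moxie_power_w / electrolyzer_power_w

# ...consistent with producing 25x the oxygen at comparable power.
assert per_watt_advantage == oxygen_ratio
```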
A device like this would need to go through long-term stress testing, of course, to ensure that performance doesn't degrade over time and it is generally robust. The membrane that separates the cathode and anode sides was operated carefully to prevent the CO2 from fouling it up, for example. If your survival depends on a device you brought to Mars, malfunctions aren't an option.
If you were up in the early hours of this morning, you may have noticed the full moon turning a shade or so darker and redder.
What you were seeing is called a penumbral lunar eclipse. Caused by the moon dipping behind the Earth’s fuzzy penumbra, or outer shadow, this subtle shading effect peaked at 4:32 am ET November 30, when—according to NASA—83% of the moon was in the shadow of our planet.
NASA has also given a list of the names November’s full moon is known by: The Algonquin tribes have long called this the Cold Moon after the long, frozen nights. Others know it as the Frost Moon, while an Old European Name is Oak Moon: perhaps because of ancient Druid traditions that involve harvesting mistletoe from oak trees for the upcoming winter solstice.
In America, the November full moon is perhaps still best known as the Beaver Moon—with Native Americans associating it with a time when the beavers are scrabbling to finish building their dens from mud and sticks and rocks in preparation for winter.
While this was the last penumbral eclipse of the year, don’t worry if you missed the occurrence due to sleep or clouds.
For those who didn’t get to witness the phenomenon in person, from San Francisco to Michigan to the Sydney Opera House, here are some stunning pictures of this year’s last penumbral lunar eclipse.
Source: A 'Beaver Full Moon' With Lunar Eclipse Happened This Morning—And Folks Took Some Stunning Photos (Good News Network)
Today, DeepMind announced that it has seemingly solved one of biology's outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables its complex functions. It's a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.
The limitations of the system aren't yet clear—DeepMind says it's currently planning on a peer-reviewed paper and has only made a blog post and some press releases available. But the system clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.
Between the folds
To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment.
The interactions of these amino acids determine the three-dimensional structure that the chain adopts after it's produced. Hydrophobic amino acids end up on the interior of the structure in order to avoid the watery environment. Positive and negatively charged amino acids attract each other. Hydrogen bonds enable the formation of regular spirals or parallel sheets. Collectively, these shape what might otherwise be a disordered chain, enabling it to fold up into an ordered structure. And that ordered structure in turn defines the behavior of the protein, allowing it to act like a catalyst, bind to DNA, or drive the contraction of muscles.
Determining the order of amino acids in the chain of a protein is relatively easy—it's defined by the order of DNA bases within the gene that encodes the protein. And as we've gotten very good at sequencing entire genomes, we have a superabundance of gene sequences, and thus a huge surplus of protein sequences, available to us. For many of them, though, we have no idea what the folded protein looks like, which makes it difficult to determine how they function.
Given that the backbone of a protein is very flexible, nearly any two amino acids of a protein could potentially interact with each other. So figuring out which ones actually do interact in the folded protein, and how that interaction minimizes the free energy of the final configuration, becomes an intractable computational challenge once the number of amino acids gets too large. Essentially, when any amino acid could occupy any potential coordinates in a 3D space, figuring out what to put where becomes difficult.
Levinthal's Paradox
In a small way, this story overlaps with my education. While an undergrad, I was taught biology by Cyrus Levinthal, whose name will forever be associated with the paradox he identified. Levinthal noted that the chemical bonds of proteins give them tremendous freedom to adopt countless configurations—he estimated that a typical protein could exist in up to 10^300 configurations. Yet, once made, most proteins seem to adopt their final configurations in less than a second.
Levinthal's Paradox is named for the apparent contradiction here: nobody is guiding the folding, so it should happen through randomly sampling configurations. But in the real world, it folds much too quickly to have done so. Levinthal himself had ideas about how this apparent paradox was likely resolved in biology, and some of those have been confirmed and elaborated. But he missed out on the progress of computational structure predictions, having died in 1990.
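Levinthal's back-of-envelope argument is easy to reproduce. The sketch below assumes just 3 conformations per residue for a 150-residue protein and a generous sampling rate; both numbers are conventional illustrations rather than measurements, and they still put exhaustive search hopelessly beyond the age of the universe.

```python
residues = 150                 # a modest-sized protein
conformations_per_residue = 3  # deliberately conservative; Levinthal's own
                               # estimate worked out far higher
samples_per_second = 1e13      # generous: one new conformation every 0.1 ps

total_conformations = conformations_per_residue ** residues   # ~10^71
seconds_needed = total_conformations / samples_per_second
age_of_universe_s = 4.3e17     # ~13.8 billion years in seconds

# Even this stripped-down protein would need vastly longer than the age
# of the universe to sample its conformations exhaustively, yet real
# proteins fold in under a second.  That gap is the paradox.
excess_factor = seconds_needed / age_of_universe_s
```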
Despite the difficulties, there has been some progress, including through distributed computing and the gamification of folding. But an ongoing, biennial event called the Critical Assessment of protein Structure Prediction (CASP) has seen pretty irregular progress throughout its existence. And in the absence of a successful algorithm, people are left with the arduous task of purifying the protein and then using X-ray diffraction or cryo-electron microscopy to figure out the structure of the purified form, endeavors that can often take years.
DeepMind enters the fray
DeepMind is an AI company that was acquired by Google in 2014. Since then, it's made a number of splashes, developing systems that have successfully taken on humans at Go, chess, and even StarCraft. In several of its notable successes, the system was trained simply by providing it a game's rules before setting it loose to play itself.
The system is incredibly powerful, but it wasn't clear that it would work for protein folding. For one thing, there's no obvious external standard for a "win"—if you get a structure with a very low free energy, that doesn't guarantee there isn't something slightly lower out there. There's also not much in the way of rules. Yes, amino acids with opposite charges will lower the free energy if they're next to each other. But that won't happen if it comes at the cost of dozens of hydrogen bonds and hydrophobic amino acids sticking out into water.
So how do you adapt an AI to work under these conditions? For their new algorithm, called AlphaFold, the DeepMind team treated the protein as a spatial network graph, with each amino acid as a node and the connections between them mediated by their proximity in the folded protein. The AI itself is then trained on the task of figuring out the configuration and strength of these connections by feeding it the previously determined structures of over 170,000 proteins obtained from a public database.
When given a new protein, AlphaFold searches for any proteins with a related sequence, and aligns the related portions of the sequences. It also searches for proteins with known structures that also have regions of similarity. Typically, these approaches are great at optimizing local features of the structure but not so great at predicting the overall protein structure—smooshing a bunch of highly optimized pieces together doesn't necessarily produce an optimal whole. And this is where an attention-based deep-learning portion of the algorithm was used to make sure that the overall structure was coherent.
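The graph representation described above can be illustrated with a contact map: residues are nodes, and an edge links any pair closer than a cutoff distance. The coordinates below are invented for a toy four-residue chain, and note that AlphaFold predicts inter-residue distances rather than taking them as input; this sketch only shows the data structure.

```python
import math

def contact_graph(coords, cutoff=4.0):
    """Build a residue-contact graph from 3-D coordinates.

    coords: list of (x, y, z) positions, one per residue (invented here).
    Returns an adjacency list mapping each residue index to the set of
    residues within `cutoff` distance units of it.
    """
    edges = {i: set() for i in range(len(coords))}
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= cutoff:
                edges[i].add(j)
                edges[j].add(i)
    return edges

# A toy 4-residue chain folded into a square: consecutive residues sit
# 3.8 units apart, and the fold also brings residue 0 near residue 3.
toy = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (3.8, 3.8, 0.0), (0.0, 3.8, 0.0)]
graph = contact_graph(toy, cutoff=4.0)
```

Diagonally opposite residues (0–2 and 1–3) fall outside the cutoff, so the graph records the fold's contacts rather than all pairs.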
A clear success, but with limits
For this year's CASP, AlphaFold and algorithms from other entrants were set loose on a series of proteins that were either not yet solved (and solved as the challenge went on) or were solved but not yet published. So there was no way for the algorithms' creators to prep the systems with real-world information, and the algorithms' output could be compared to the best real-world data as part of the challenge.
AlphaFold did quite well—far better, in fact, than any other entry. For about two-thirds of the proteins it predicted a structure for, it was within the experimental error that you'd get if you tried to replicate the structural studies in a lab. Overall, on an evaluation of accuracy that ranges from zero to 100, it averaged a score of 92—again, the sort of range that you'd see if you tried to obtain the structure twice under two different conditions.
By any reasonable standard, the computational challenge of figuring out a protein's structure has been solved.
Unfortunately, there are a lot of unreasonable proteins out there. Some immediately get stuck into the membrane; others quickly pick up chemical modifications. Still others require extensive interactions with specialized enzymes that burn energy in order to force other proteins to refold. In all likelihood, AlphaFold will not be able to handle all of these edge cases, and without an academic paper describing the system, the system will take a little while—and some real-world use—to figure out its limitations. That's not to take away from an incredible achievement, just to warn against unreasonable expectations.
The key question now is how quickly the system will be made available to the biological research community so that its limitations can be defined and we can start putting it to use on cases where it's likely to work well and have significant value, like the structure of proteins from pathogens or the mutated forms found in cancerous cells.
Engineers are racing to fix a failed piece of equipment on NASA’s future deep-space crew capsule Orion ahead of its first flight to space. It may require months of work to replace and fix. Right now, engineers at NASA and Orion’s primary contractor, Lockheed Martin, are trying to figure out the best way to fix the component and how much time the repairs are going to take.
In early November, engineers at Lockheed Martin working on Orion noticed that a power component inside the vehicle had failed, according to an internal email and an internal PowerPoint presentation seen by The Verge. The component is within one of the spacecraft’s eight power and data units, or PDUs. The PDUs are the “main power/data boxes,” for Orion according to the email, responsible for activating key systems that Orion needs during flight.
Orion is a critical part of NASA’s Artemis program, which aims to send the first woman and the next man to the Moon by 2024. The cone-shaped capsule is designed to launch on top of a future rocket called the Space Launch System, or SLS, a vehicle that NASA has been building for the last decade. To test out both of these systems’ capabilities, NASA plans to launch an uncrewed Orion capsule on top of the SLS on the rocket’s first flight in late 2021 — a mission called Artemis I.
“While the PDU is still fully operational without this redundant channel, we are swiftly troubleshooting the card while also continuing close-out activities on Orion,” a representative for Lockheed Martin said in a statement to The Verge. “We are fully committed to seeing Orion launch next year on its historic Artemis I mission to the Moon.”
Replacing the PDU isn’t easy. The component is difficult to reach: it’s located inside an adapter that connects Orion to its service module — a cylindrical trunk that provides support, propulsion, and power for the capsule during its trip through space. To get to the PDU, Lockheed Martin could remove the Orion crew capsule from its service module, but it’s a lengthy process that could take up to a year. As many as nine months would be needed to take the vehicle apart and put it back together again, in addition to three months for subsequent testing, according to the presentation.
Lockheed has another option, but it’s never been done before and may carry extra risks, Lockheed Martin engineers acknowledge in their presentation. To do it, engineers would have to tunnel through the adapter’s exterior by removing some of the outer panels of the adapter to get to the PDU. The panels weren’t designed to be removed this way, but this scenario may only take up to four months to complete if engineers figure out a way to do it.
A third option is that Lockheed Martin and NASA could fly the Orion capsule as is. The PDU failed in such a way that it lost redundancy within the unit, so it can still function. But at a risk-averse agency like NASA, flying a vehicle without a backup plan is not exactly an attractive option. It’s still not clear what went wrong inside the unit, which was tested before it was installed on the spacecraft, according to a person familiar with the matter.
If engineers choose to remove Orion from its service module, the capsule’s first flight on the SLS may be delayed past its current date of November 2021. But the SLS has experienced its own set of delays: it was supposed to fly for the first time in 2017 but hasn’t done so yet. It’s not clear if the SLS itself will make the November 2021 flight date either; a key test of the rocket coming up at the end of the year has been pushed back, with no new target date set. So it’s possible that Lockheed Martin and NASA can fix Orion before the SLS is ready to fly.
Any further delays to Artemis I add uncertainty to NASA’s lunar landing timeline. NASA is hoping to land astronauts on the Moon by 2024, though many experts are skeptical that such a mission can be pulled off in time. Artemis I is vulnerable to other possible delays, but the component failure adds one more level of uncertainty to when the Orion and SLS combo will get off the ground.
Update November 30th, 7:00PM ET: This story has been updated to include information from a statement from Lockheed Martin.
A former NASA astronaut now employed by Axiom Space says that SpaceX’s private astronaut launch debut will reuse the same Crew Dragon spacecraft currently supporting NASA’s Crew-1 mission in orbit.
Currently just a few weeks into a planned six-month stint in orbit, potentially marking the longest uninterrupted flight of an American spacecraft ever, Crew Dragon capsule C207 and an expendable trunk section arrived at the International Space Station (ISS) on November 16th. Known as Crew-1, the mission represents SpaceX and NASA’s commercial operational astronaut launch debut, carrying four astronauts to the ISS.
Crucially, the mission has been an almost flawless success so far and Falcon 9 has now completed four Crew Dragon launches without issue. On the Dragon side of things, the Crew-1 spacecraft performed a bit less perfectly than those tasked with flying Demo-1 and Demo-2, but SpaceX handled the minor issues that arose with the professionalism and composure of a team far more familiar with human spaceflight.
Early success aside, there is still real uncertainty ahead for Crew Dragon. While several Russian spacecraft have decades of experience spending several months at a time in orbit, a crewed US spacecraft has never spent more than 84 days in space. SpaceX itself actually beat out NASA to secure the second-place record with Crew Dragon’s 63-day Demo-2 astronaut launch debut, completed with a successful reentry and splashdown on August 2nd.
However, Crew-1 is expected to more than double that previous US record and almost triple SpaceX’s own second-place record, spending roughly 180 days (six months, give or take) in orbit. Barring an unprecedented space station or spacecraft emergency, Crew Dragon C207 will undock from the ISS, reenter Earth’s atmosphere, and splash down in the Gulf of Mexico or the Atlantic Ocean sometime in May 2021. Of course, as the first recoverable US spacecraft to spend anywhere close to that long in orbit, the Crew-1 Crew Dragon will be closely monitored to ensure the safety and reliability of its intricate reentry and recovery systems after some six months exposed to the extremes of space.
Still, success is by far the likeliest outcome. When Crew Dragon C207 splashes down, its four astronaut passengers will be carefully extricated and the inspection and refurbishment process will begin almost immediately thereafter. Crew-1 will technically be the second Crew Dragon spacecraft to be refurbished after an orbital spaceflight, following Demo-2 capsule C206’s inaugural Dragon 2 reuse perhaps just a month or two prior.
The Demo-2 Crew Dragon capsule is currently scheduled to fly a second time as early as March 31st, 2021 on SpaceX’s Crew-2 mission, ferrying another four astronauts to the ISS. If successful, Crew-2 will represent the first commercial astronaut launch ever to reuse both an orbital-class rocket booster and an orbital spacecraft, and the NASA-overseen process of refurbishment and re-flight will thus pave the way for future flight-proven astronaut launches. That includes private company Axiom Space’s first private AX-1 astronaut launch, which is currently scheduled to launch as early as Q4 2021.
AX-1 will be commanded by former NASA astronaut Michael López-Alegría and will carry three other private astronauts, including Israeli multimillionaire Eytan Stibbe. SpaceX will thus be tasked with launching Israel’s second astronaut ever; the first, Ilan Ramon, was killed in 2003 when heat shield damage caused NASA Space Shuttle Columbia to break up during reentry.
SpaceX private astronaut launch debut to reuse Crew-1 Dragon spacecraft
The European Space Agency has signed a historic deal with Swiss startup ClearSpace to remove a single item of space debris in 2025. The $103 million price tag is steep, but this mission—involving an orbiting, mouth-like net—could herald the beginning of an entirely new space industry.
The new contract, announced late last week, is unique in that the mission will involve “the first removal of an item of space debris from orbit,” according to ESA. ClearSpace, a spin-off of the Ecole Polytechnique Federale de Lausanne (EPFL), is the commercial provider for this mission, and it will seek the help of partners in Germany, the Czech Republic, Sweden, Poland, and several other European countries.
The target in question is the Vega Secondary Payload Adapter (or Vespa), which has been circling in low Earth orbit (LEO) since 2013. This 247-pound (112-kilogram) payload adapter successfully dispatched a Proba-V satellite to space, but, like so many other items in LEO, it currently serves no purpose, presenting a potential hazard to functioning satellites and possibly even the International Space Station.
€86 million (USD $103 million) seems like an awful lot of money to spend on the removal of a single item of space debris, but ESA is making an important investment. The technology required for the ClearSpace-1 mission, in which a spacecraft will “rendezvous, capture, and bring down” the Vespa payload adapter, will likely be leveraged in similar future missions (assuming this particular strategy will work). Ultimately, ESA is hoping to launch “a new commercial sector in space.”
The ClearSpace solution will involve a spacecraft and conical net that will “eat” the Vespa payload adapter. This will require unimaginable precision, as the objects will be traveling at speeds reaching 17,400 miles per hour (28,000 km/hr). Slight miscalculations could make the target object bounce out before the net can close or even cause a serious collision. With its cargo secured, the ClearSpace spacecraft will fall into Earth’s atmosphere and burn up on re-entry.
According to ESA, the number of debris objects currently being tracked is now at about 22,300. With each added item, the chance of a collision increases, making LEO a dangerous place for satellites and astronauts. Removing this debris “has become necessary and is our responsibility to ensure that tomorrow’s generations can continue benefiting from space infrastructures and exploration,” according to ClearSpace, which adds that ClearSpace-1 will “demonstrate the technical ability and commercial capacity to significantly enhance the long-term sustainability of spaceflight.”
ClearSpace has its conical net, but several other companies are developing their own concepts. RemoveDEBRIS, for example, uses a harpoon to snatch wayward objects in orbit. Only time will tell which strategy works best, but it’s becoming increasingly clear that solutions are coming. The time has come for us to clean up our mess.
Article From & Read More ( Why Europe's Space Agency Is Spending $103 Million to Remove a Single Piece of Space Junk - Gizmodo )
Space is closer than you might think -- about 62 miles up, only a little farther away from you than San Jose is from San Francisco. Heck, you can get halfway to space in a balloon.
The hardest part about space, it turns out, isn't so much getting there as staying there. That's where the idea of orbiting comes into play. Once you accomplish the hard work of getting a spacecraft into orbit, you can get years of use out of it as it loops more or less effortlessly around the planet on its own invisible track.
Scientists figured out how orbits work centuries before humans could launch spacecraft, but there's lots for the rest of us to learn about these looping tracks above the Earth -- and good reason to learn it. With new government and private sector projects, space stands to become even more important than it was during the 1960s at the start of the Space Age.
"It's the new Space Age -- and the new space race," said Ben Lamm, chief executive of software company Hypergiant. His company is working with the US Air Force on its Chameleon spacecraft, designed to be more adaptable, more independent and smarter than traditional spacecraft.
Let's start with Isaac Newton
If you want to understand orbits, a great place to start is Isaac Newton, whose research paved the way to modern science with explanations of motion, light and gravity. Newton's Treatise of the System of the World from 1685 elegantly encapsulates how orbits work with a thought experiment that requires no calculus whatsoever.
Newton imagined a stone hurled horizontally from the top of a tall mountain. "The greater the velocity is with which it is projected, the farther it goes before it falls to Earth," Newton said. With increasing horizontal velocity, "it would describe an arc of 1, 2, 5, 10, 100, 1,000 miles before it arrived at the Earth, till at last exceeding the limits of the Earth, it should pass quite by without touching it."
In other words, the stone would fall at exactly the same rate that the Earth's surface receded because of the Earth's curvature. In Newton's experiment, a stone shot with the right speed would circle the Earth and smack right back into the mountain.
In the real world, friction with the Earth's atmosphere would slow the projectile long before it could circle the Earth and return to the mountain. But a few miles up into space, where air is scarce, that projectile would keep on orbiting with almost nothing to stop it.
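Newton's thought experiment has an exact modern answer. Here's a minimal sketch in Python, using the standard values for Earth's gravitational parameter and mean radius, that computes the speed his stone would need to keep "falling around" the planet:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter (G*M), m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed (m/s) needed to maintain a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    return math.sqrt(MU_EARTH / r)

# Newton's idealized stone: skimming the surface with no atmosphere in the way.
surface = circular_orbit_speed(0.0)
print(f"Surface-skimming orbit: {surface:,.0f} m/s (~{surface * 3.6:,.0f} km/h)")
```

The answer comes out a little under 8 kilometers per second, which is why anything in low orbit is moving at tens of thousands of kilometers per hour.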
Traveling fast sideways, not up
That brings us to the main difficulty of putting a satellite into orbit: getting enough horizontal velocity.
Whether you're watching enormous Saturn V rockets carrying humans to the moon or slender candlesticks launching smaller spacecraft, the rockets you see produce immense amounts of thrust. The vast majority of rocket fuel, though, propels the spacecraft laterally, not up. When you watch a rocket launch, the tilt toward the horizontal begins almost immediately after the craft leaves the launchpad.
It takes a lot more power for SpaceX to carry NASA astronauts to the ISS than it does for Blue Origin, the rocketry startup funded by Amazon Chief Executive Jeff Bezos, to pop its New Shepard rockets up and down without entering orbit.
The lower a spacecraft orbits, the faster it goes. That's why the Hubble Space Telescope, about 340 miles (547 km) up, circles the Earth every 95 minutes, but Global Positioning System satellites for navigation services, at 12,550 miles (20,200 km) up, take about 12 hours for each orbit.
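That relationship follows from Kepler's third law: a circular orbit's period grows with the 3/2 power of its radius. A quick sketch, plugging in the approximate altitudes quoted above for Hubble and GPS, reproduces both figures:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def orbital_period_hours(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude, via Kepler's third law."""
    r = R_EARTH + altitude_km * 1000.0
    return 2 * math.pi * math.sqrt(r**3 / MU_EARTH) / 3600.0

print(f"Hubble (~547 km up): {orbital_period_hours(547) * 60:.0f} minutes")
print(f"GPS (~20,200 km up): {orbital_period_hours(20200):.1f} hours")
```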
Getting a launch boost from Earth
The Earth's rotation gives rockets a healthy eastward fling, and the closer to the equator a launch is, the bigger the fling.
That's in part why US launch sites are located toward the southern parts of the country and why European spacecraft sometimes are launched from the Guiana Space Center in South America, just 5 degrees of latitude away from the equator. NASA considered launching moon missions from an equatorial site -- though the fling factor was secondary to fuel considerations matching the moon's orbit.
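The size of that eastward fling is easy to estimate from the Earth's circumference and rotation period. In this illustrative sketch, the launch-site latitudes are approximate values assumed for Cape Canaveral and the Guiana Space Center:

```python
import math

R_EARTH = 6_371_000.0    # mean Earth radius, m
SIDEREAL_DAY = 86_164.1  # one full rotation of the Earth, s

def eastward_boost_kmh(latitude_deg: float) -> float:
    """Surface rotation speed (km/h) a rocket inherits at a given latitude."""
    v = 2 * math.pi * R_EARTH * math.cos(math.radians(latitude_deg)) / SIDEREAL_DAY
    return v * 3.6  # convert m/s to km/h

print(f"Cape Canaveral (~28.5° N): {eastward_boost_kmh(28.5):,.0f} km/h")
print(f"Guiana Space Center (~5° N): {eastward_boost_kmh(5.0):,.0f} km/h")
```

The boost shrinks with the cosine of latitude, so the near-equatorial site gets a few hundred extra kilometers per hour for free.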
When SpaceX launches a rocket, it reserves some fuel to return the first stage of the rocket to Earth after its job getting a spacecraft into orbit is done. For launches from Cape Canaveral in Florida, the rocket stage lands on a drone ship floating on the Atlantic hundreds of miles to the east.
Low Earth orbit: Join the party
Space starts about 62 miles (100 km) above us, though the boundary is somewhat arbitrary. A bit higher than that, reaching up to about 1,243 miles (2,000 km) above the Earth's surface, is the most popular part of space, called low Earth orbit, or LEO.
This is where you'll find the International Space Station along with satellites for weather forecasting, spying, television, imaging and, increasingly, satellite-based broadband. Every human who's been in space, aside from a few who made it to the moon's vicinity during NASA's Apollo missions, has hugged the Earth in LEO.
It's easier than ever to get to LEO, and that's triggered "a golden age of LEO innovation," said HawkEye 360 Chief Executive John Serafini, whose company helps government and military customers track radio signals to spot subjects like smugglers or lost boats.
"It would have been almost impossible for HawkEye 360 to build out a constellation of satellites 10 years ago," but SpaceX's reusable rockets and other improvements have lowered launch costs. "There are more opportunities to catch rides to orbit than ever before," he said.
Because LEO is relatively accessible, though, it's also where most of the Earth's space junk orbits. Friction with the upper fringes of the atmosphere drags a fraction of the detritus out of the way. Satellites must reckon with atmospheric friction, too, often nudging themselves back into their proper orbits with gentle, solar-powered ion thrusters.
Heading higher to geosynchronous orbit
Medium Earth orbit, which reaches up to about 22,233 miles (35,780 km) above Earth, is a desert compared with LEO. But there are some notable denizens of this zone, in particular navigation satellite constellations.
The big sat-nav constellations, each with roughly 24 satellites, are the United States' GPS, Europe's Galileo, Russia's Glonass and China's BeiDou. GPS is handy for smartphone navigation, but military use is also a top justification for the expense of launching and maintaining these satellites.
Just above the upper boundary of MEO is geosynchronous orbit, a sweet spot where the orbital period matches the Earth's rotation. A satellite in geosynchronous orbit above the equator, called geostationary orbit, appears in the exact same spot in the sky as viewed from Earth.
That's particularly useful for communications because you can point a fixed ground station antenna directly at the satellite. However, radio transmission delays and signal strength are worse than with spacecraft in lower orbits.
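The geostationary sweet spot can be derived by inverting Kepler's third law: find the orbital radius whose period equals one sidereal day (one full rotation of the Earth, about 23 hours 56 minutes). A short sketch, using standard values for Earth's gravitational parameter and equatorial radius:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH_EQ = 6_378_137.0   # equatorial radius, m
SIDEREAL_DAY = 86_164.1    # s

# Invert Kepler's third law: radius of the orbit whose period is one sidereal day.
r_geo = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r_geo - R_EARTH_EQ) / 1000.0
print(f"Geostationary altitude: {altitude_km:,.0f} km")
```

The result lands at roughly 35,800 km above the equator, the familiar figure for geostationary communications satellites.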
Not all parking places in geosynchronous orbit are created equal. Variations in the Earth's density nudge some satellites out of their spot, requiring occasional propulsion to keep them in line, Drexel's Yousuff said.
Circles and ellipses
Although many orbits are circular, some are elongated into more elliptical shapes that can slow a satellite's speed when it's farther away from the Earth.
Ellipses also are handy for changing orbits. NASA's Apollo missions began by launching the spacecraft into Earth orbit; then a new rocket burn sent it into an elliptical orbit that stretched toward the moon, letting the astronauts coast most of the way. Another burn inserted the spacecraft into lunar orbit.
One of Yousuff's favorite orbit types is elliptical. Most of Russia is well north of the equator, which limits geostationary satellites' usefulness. So the Russians came up with an alternative called the Molniya orbit.
With the Molniya orbit, a satellite whips over Australia at its closest point in orbit, called perigee, then naturally slows as it reaches its highest point above Moscow, called apogee. That way it spends much of its orbiting time usefully accessible.
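The vis-viva equation makes that whip-and-linger behavior concrete: speed on an ellipse depends on the current distance and the orbit's semi-major axis. The perigee and apogee altitudes below are representative Molniya values, assumed for illustration:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def visviva_speed(r_m: float, semi_major_m: float) -> float:
    """Orbital speed (m/s) at distance r on an ellipse, via the vis-viva equation."""
    return math.sqrt(MU_EARTH * (2.0 / r_m - 1.0 / semi_major_m))

# Representative Molniya orbit: ~600 km perigee, ~39,700 km apogee.
r_p = R_EARTH + 600e3
r_a = R_EARTH + 39_700e3
a = (r_p + r_a) / 2.0

print(f"Perigee speed: {visviva_speed(r_p, a) / 1000:.1f} km/s")
print(f"Apogee speed:  {visviva_speed(r_a, a) / 1000:.1f} km/s")
```

The satellite moves several times faster at perigee than at apogee, which is exactly why it dwells so long over the high northern latitudes.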
There are plenty of other orbit types, too, like polar orbits that cross over both of the Earth's poles. And spacecraft that reach Earth's escape velocity can orbit the sun instead. The orbit of SpaceX's Starman just carried Elon Musk's publicity stunt close to Mars, for example. If today's commercial activity in low Earth orbit keeps lowering rocket launch costs, perhaps actual humans will follow him.