Science & Technology

CAPE CANAVERAL: Nasa astronauts took another spacewalk outside the International Space Station on Tuesday, this time to grease the robot arm’s new hand.

Commander Randy Bresnik ventured out for the second time in less than a week, along with Mark Vande Hei.

The pair replaced the latching mechanism on one end of the 58-foot robot arm on Thursday. The mechanism malfunctioned in August.

Tuesday’s work involved using a grease gun, which resembles a caulking gun, to keep the latching mechanism working smoothly. The two-part lube job is expected to spill into next week, requiring a third spacewalk.

These latches, or hands, are located on each end of the Canadian-built robot arm. They’re used to grab arriving US cargo ships and also allow the robot arm to move around the orbiting lab.

Launched in 2001 with the rest of the robot arm, the original latches were showing their age. Nasa plans to replace the latching mechanism on the opposite end of the arm early next year.

Published in Dawn, October 11th, 2017

A representation of the evolution of the universe over 13.8 billion years. Image: NASA and the WMAP consortium

Different methods of studying cosmic expansion yield slightly different results, including for the age of the universe. In a new study, astronomers from the Harvard-Smithsonian Center for Astrophysics have calculated that these discrepancies could be reconciled if the dark energy that drives cosmic acceleration were not constant in time.

The universe is not only expanding – it is accelerating outward, driven by what is commonly referred to as “dark energy.” The term is a poetic analogy to dark matter, the mysterious material that dominates the matter in the universe and that really is dark because it does not radiate light (it reveals itself via its gravitational influence on galaxies). Two explanations are commonly advanced to explain dark energy. The first, as Einstein once speculated, is that gravity itself causes objects to repel one another when they are far enough apart (he added this “cosmological constant” term to his equations). The second explanation hypothesizes (based on our current understanding of elementary particle physics) that the vacuum has properties that provide energy to the cosmos for expansion.
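In Einstein’s general relativity, this cosmological constant enters the field equations as an extra term alongside the ordinary matter and energy sources:

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}

A positive \Lambda behaves like a repulsive contribution that becomes dominant at large separations, which is what allows it to drive accelerated expansion.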

For several decades cosmologists have successfully used a relativistic equation with dark matter and dark energy to explain increasingly precise observations about the cosmic microwave background, the cosmological distribution of galaxies, and other large-scale cosmic features. But as the observations have improved, some apparent discrepancies have emerged. One of the most notable is the age of the universe: there is an almost 10% difference between measurements inferred from the Planck satellite data and those from so-called Baryon Acoustic Oscillation experiments. The former relies on far-infrared and submillimeter measurements of the cosmic microwave background and the latter on the spatial distribution of visible galaxies.
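The link between the measured expansion rate and the inferred age is direct: integrating the expansion history gives

    t_0 = \int_0^1 \frac{da}{a\,H(a)} \sim \frac{1}{H_0}

so, for a fixed shape of the expansion history, a roughly 10% disagreement in the Hubble constant H_0 translates into a comparable disagreement in the inferred age of the universe.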

CfA astronomer Daniel Eisenstein was a member of a large consortium of scientists who suggest that most of the difference between these two methods, which sample different components of the cosmic fabric, could be reconciled if the dark energy were not constant in time. The scientists apply sophisticated statistical techniques to the relevant cosmological datasets and conclude that if the dark energy term varied slightly as the universe expanded (though still subject to other constraints), it could explain the discrepancy. Direct evidence for such a variation would be a dramatic breakthrough, but so far has not been obtained. One of the team’s major new experiments, the Dark Energy Spectroscopic Instrument (DESI) Survey, could settle the matter. It will map over twenty-five million galaxies in the universe, reaching back to objects only a few billion years after the big bang, and should be completed sometime in the mid-2020s.
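A standard way to describe dark energy that changes with time is through its equation-of-state parameter w. One common benchmark (a widely used parametrization, not necessarily the exact form adopted in the study) is the Chevallier–Polarski–Linder form

    w(a) = w_0 + w_a (1 - a)

where a is the cosmic scale factor. A true cosmological constant has w = -1 at all times, so any statistically significant measurement of w_0 \neq -1 or w_a \neq 0 would be direct evidence for dynamical dark energy.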

Publication: Gong-Bo Zhao, et al., “Dynamical Dark Energy in Light of the Latest Observations,” Nature Astronomy 1, 627–632 (2017) doi:10.1038/s41550-017-0216-z

Source: Harvard-Smithsonian Center for Astrophysics

Engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap.

A team of engineers at MIT has developed a method for designing efficient computer chips that may get miniature smart drones off the ground.

In recent years, engineers have worked to shrink drone technology, building flying prototypes that are the size of a bumblebee and loaded with even tinier sensors and cameras. Thus far, they have managed to miniaturize almost every part of a drone, except for the brains of the entire operation — the computer chip.

Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone’s pitch, speed, and trajectory. To do so, these computers use between 10 and 30 watts of power, supplied by batteries that would weigh down a much smaller, bee-sized drone.

Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They will present a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held this week at MIT.

The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip.

The key contribution of their work is a new approach for designing the chip hardware and the algorithms that run on the chip. “Traditionally, an algorithm is designed, and you throw it over to a hardware person to figure out how to map the algorithm to hardware,” Sze says. “But we found by designing the hardware and algorithms together, we can achieve more substantial power savings.”

“We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down,” Karaman says.

The new chip processes streaming images at 20 frames per second and automatically carries out commands to adjust a drone’s orientation in space. The streamlined chip performs all these computations while using just below 2 watts of power — making it an order of magnitude more efficient than current drone-embedded chips.

Karaman says the team’s design is the first step toward engineering “the smallest intelligent drone that can fly on its own.” He ultimately envisions disaster-response and search-and-rescue missions in which insect-sized drones flit in and out of tight spaces to examine a collapsed structure or look for trapped individuals. Karaman also foresees novel uses in consumer electronics.

“Imagine buying a bottlecap-sized drone that can integrate with your phone, and you can take it out and fit it in your palm,” he says. “If you lift your hand up a little, it would sense that, and start to fly around and film you. Then you open your hand again and it would land on your palm, and you could upload that video to your phone and share it with others.”

Karaman and Sze’s co-authors are graduate students Zhengdong Zhang and Amr Suleiman, and research scientist Luca Carlone.

From the ground up

Current minidrone prototypes are small enough to fit on a person’s fingertip and are extremely light, requiring only 1 watt of power to lift off from the ground. Their accompanying cameras and sensors use up an additional half a watt to operate.

“The missing piece is the computers — we can’t fit them in terms of size and power,” Karaman says. “We need to miniaturize the computers and make them low power.”

The group quickly realized that conventional chip design techniques would likely not produce a chip that was small enough and provided the required processing power to intelligently fly a small autonomous drone.

“As transistors have gotten smaller, there have been improvements in efficiency and speed, but that’s slowing down, and now we have to come up with specialized hardware to get improvements in efficiency,” Sze says.

The researchers decided to build a specialized chip from the ground up, developing algorithms to process data, and hardware to carry out that data-processing, in tandem.

Tweaking a formula

Specifically, the researchers made slight changes to an existing algorithm commonly used to determine a drone’s “ego-motion,” or awareness of its position in space. They then implemented various versions of the algorithm on a field-programmable gate array (FPGA), a very simple programmable chip. To formalize this process, they developed a method called iterative splitting co-design that could strike the right balance of achieving accuracy while reducing the power consumption and the number of gates.
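The flavor of that co-design loop can be sketched in a few lines of code. The sketch below uses invented cost models as stand-ins (the actual work evaluated real algorithm variants on FPGA hardware against flight datasets); it only illustrates the idea of jointly searching algorithm parameters for the cheapest design that still meets an accuracy target.

```python
# Toy sketch of hardware/algorithm co-design: sweep algorithm variants,
# estimate accuracy and power for each, and keep the cheapest design that
# still meets the accuracy target. Both cost models below are invented
# for illustration.

def estimate_error(bit_width, window):
    """Hypothetical accuracy model: error shrinks with more arithmetic
    precision and a longer feature-tracking window."""
    return 1.0 / (bit_width * window ** 0.5)

def estimate_power_mw(bit_width, window):
    """Hypothetical power model: power grows with precision and window,
    standing in for the number of FPGA gates a variant needs."""
    return 5.0 * bit_width + 20.0 * window

def co_design(error_budget):
    candidates = [(b, w) for b in (8, 12, 16, 24, 32) for w in (2, 4, 8, 16)]
    feasible = [(estimate_power_mw(b, w), b, w)
                for b, w in candidates
                if estimate_error(b, w) <= error_budget]
    if not feasible:
        raise ValueError("no design variant meets the accuracy target")
    power, b, w = min(feasible)
    return {"bit_width": b, "window": w, "power_mw": power}

print(co_design(error_budget=0.02))  # lowest-power variant that is accurate enough
```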

A typical FPGA consists of hundreds of thousands of disconnected gates, which researchers can connect in desired patterns to create specialized computing elements. Reducing the number of gates through co-design allowed the team to choose an FPGA chip with fewer gates, leading to substantial power savings.

“If we don’t need a certain logic or memory process, we don’t use them, and that saves a lot of power,” Karaman explains.

Each time the researchers tweaked the ego-motion algorithm, they mapped the version onto the FPGA’s gates and connected the chip to a circuit board. They then fed the chip data from a standard drone dataset — an accumulation of streaming images and accelerometer measurements from previous drone-flying experiments that had been carried out by others and made available to the robotics community.

“These experiments are also done in a motion-capture room, so you know exactly where the drone is, and we use all this information after the fact,” Karaman says.

Memory savings

For each version of the algorithm that was implemented on the FPGA chip, the researchers observed the amount of power that the chip consumed as it processed the incoming data and estimated its resulting position in space.

The team’s most efficient design processed images at 20 frames per second and accurately estimated the drone’s orientation in space, while consuming less than 2 watts of power.

The power savings came partly from modifications to the amount of memory stored in the chip. Sze and her colleagues found that they were able to shrink the amount of data that the algorithm needed to process, while still achieving the same outcome. As a result, the chip itself was able to store less data and consume less power.

“Memory is really expensive in terms of power,” Sze says. “Since we do on-the-fly computing, as soon as we receive any data on the chip, we try to do as much processing as possible so we can throw it out right away, which enables us to keep a very small amount of memory on the chip without accessing off-chip memory, which is much more expensive.”
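That on-the-fly pattern can be illustrated in miniature (with invented stand-ins for the camera and the feature extractor): each frame is reduced to a small summary the moment it arrives and then discarded, so the working set stays tiny no matter how long the stream runs.

```python
# Illustrative sketch of on-the-fly processing: reduce each incoming frame
# immediately and keep only a small fixed-size state, analogous to doing as
# much computation as possible before data would have to be stored.

from collections import deque

def frame_stream(n_frames, width=64, height=48):
    """Stand-in for a camera: yields synthetic frames one at a time."""
    for i in range(n_frames):
        yield [[(i + x + y) % 256 for x in range(width)] for y in range(height)]

def summarize(frame):
    """Stand-in for feature extraction: collapse a frame to one number."""
    return sum(map(sum, frame)) / (len(frame) * len(frame[0]))

state = deque(maxlen=4)                 # small bounded history, like on-chip memory
for frame in frame_stream(100):
    state.append(summarize(frame))      # reduce immediately...
    # ...and let the raw frame go out of scope: nothing large is buffered
print("recent summaries:", list(state))
```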

In this way, the team was able to reduce the chip’s memory storage to 2 megabytes without using off-chip memory, compared to a typical embedded computer chip for drones, which uses off-chip memory on the order of a few gigabytes.

“Any which way you can reduce the power so you can reduce battery size or extend battery life, the better,” Sze says.

This summer, the team will mount the FPGA chip onto a drone to test its performance in flight. Ultimately, the team plans to implement the optimized algorithm on an application-specific integrated circuit, or ASIC, a more specialized hardware platform that allows engineers to design specific types of gates directly onto the chip.

“We think we can get this down to just a few hundred milliwatts,” Karaman says. “With this platform, we can do all kinds of optimizations, which allows tremendous power savings.”

This research was supported, in part, by the Air Force Office of Scientific Research and the National Science Foundation.

Source: Jennifer Chu, MIT News

Because the system is built on a flexible polymer film, it could be adapted for devices with complex curvature or with moving surfaces.

A team of researchers has made the first demonstration of a solid-state cooling device based on the electrocaloric effect. The thin, flexible device could keep smartphones and laptop computers cool and prevent overheating.

Engineers and scientists from the UCLA Henry Samueli School of Engineering and Applied Science and SRI International, a nonprofit research and development organization based in Menlo Park, California, have created a thin flexible device that could keep smartphones and laptop computers cool and prevent overheating.

The system’s flexibility also means it could eventually be used in wearable electronics, robotic systems and new types of personalized cooling systems. It is the first demonstration of a solid-state cooling device based on the electrocaloric effect — a phenomenon in which a material’s temperature changes when an electric field is applied to it. The research was published September 15 in Science.

The method devised by UCLA and SRI researchers is very energy-efficient. It uses a thin polymer film that transfers heat from the heat source (a battery or processor, typically) to a “heat sink,” and alternates contact between the two by switching on and off the electric voltage. Because the polymer film is flexible, the system could be adapted for devices with complex curvature or with moving surfaces.
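The cycle can be mimicked with a toy model in which the film’s electrocaloric temperature swing and its thermal coupling to whatever it touches are invented numbers: with the field on, the warmed film touches the sink and rejects heat; with the field off, the cooled film touches the source and absorbs heat.

```python
# Toy model of the electrocaloric cooling cycle. DELTA_T and K are assumed
# values for illustration only; the real device alternates contact
# electrostatically between a heat source and a heat sink.

DELTA_T = 5.0  # assumed electrocaloric temperature swing of the film (K)
K = 0.5        # assumed fraction of the temperature gap closed per contact

t_source, t_sink, t_film = 40.0, 25.0, 30.0  # degrees C
heat_to_sink = 0.0                           # arbitrary units

for cycle in range(5):
    t_film += DELTA_T                  # field ON: film warms above the sink
    dq = K * (t_film - t_sink)         # contact the sink: reject heat
    t_film -= dq
    heat_to_sink += dq
    t_film -= DELTA_T                  # field OFF: film cools below the source
    t_film += K * (t_source - t_film)  # contact the source: absorb heat
    print(f"cycle {cycle}: film at {t_film:.2f} C, heat moved {heat_to_sink:.2f}")
```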

“We were motivated by the idea of devising a personalized cooling system,” said Qibing Pei, a UCLA professor of materials science and engineering and the study’s principal investigator. “For example, an active cooling pad could keep a person comfortable in a hot office and thus lower the electricity consumption for building air conditioning. Or it could be placed in a shoe insole or in a hat to keep a runner comfortable in the hot Southern California sun. It’s like a personal air conditioner.”

A major application could be in mobile and wearable electronics. As most smartphone and tablet users know, devices tend to heat up when they are used, particularly with power-intensive applications like video streaming. So although the devices are made with interior metal radiators designed to pull heat away from the battery and computer processors, they can still overheat, which can even cause them to shut down. And excessive heat can damage the devices’ components over time.

That tendency to overheat remains a major challenge for engineers, and with the anticipated introduction of more flexible electronic devices, it’s an issue that researchers and device manufacturers are working hard to address. The cooling systems in larger devices like air conditioners and refrigerators, which use a process called vapor compression, are simply too large for mobile electronics. (They’re also impractical for smartphones and wearable technology because they use a chemical coolant that is an environmental hazard.)

“The development of practical efficient cooling systems that do not use chemical coolants that are potent greenhouse gases is becoming even more important as developing nations increase their use of air conditioning,” said Roy Kornbluh, an SRI research engineer.

The UCLA–SRI system also has certain advantages over another advanced type of cooling system, called thermoelectric coolers, which require expensive ceramic materials and whose cooling capabilities don’t yet measure up to vapor compression systems.

Pei said the invention’s other potential applications include a flexible pad for treating injuries, and reducing thermal “noise” in thermographic cameras, which are used by scientists and firefighters, and in night-vision devices.

The study’s lead authors are UCLA postdoctoral scholar Rujun Ma and doctoral student Ziyang Zhang, both members of Pei’s research group. Other authors are Kwing Tong, a UCLA graduate student; David Huber, a research engineer at SRI; and Yongho Sungtaek Ju, a UCLA professor of mechanical and aerospace engineering.

The research was supported by the Department of Energy’s Advanced Research Projects Agency–Energy and by the Air Force Office of Scientific Research. The researchers have submitted a U.S. patent application for the device.

Publication: Rujun Ma, et al., “Highly efficient electrocaloric cooling with electrostatic actuation,” Science 15 Sep 2017: Vol. 357, Issue 6356, pp. 1130-1134; DOI: 10.1126/science.aan5980

Source: Matthew Chin, UCLA News

“There are very few specific or targeted inhibitors that are used in the treatment of brain cancer. There’s really a dire need for new therapies and new ideas,” says MIT Associate Professor Michael Hemann. The background of this image shows nanoparticles (red) being taken up in the brain with glioblastoma (in green). Nuclear DNA is in blue; tumor-associated macrophages in white. Image: National Cancer Institute/Yale Cancer Center.

By cutting off a process that cancerous cells rely on, researchers from MIT have identified a possible new strategy for halting brain tumors.

MIT biologists have discovered a fundamental mechanism that helps brain tumors called glioblastomas grow aggressively. After blocking this mechanism in mice, the researchers were able to halt tumor growth.

The researchers also identified a genetic marker that could be used to predict which patients would most likely benefit from this type of treatment. Glioblastoma is usually treated with radiation and the chemotherapy drug temozolomide, treatments that may extend patients’ lifespans but in most cases do not offer a cure.

“There are very few specific or targeted inhibitors that are used in the treatment of brain cancer. There’s really a dire need for new therapies and new ideas,” says Michael Hemann, an associate professor of biology at MIT, a member of MIT’s Koch Institute for Integrative Cancer Research, and a senior author of the study.

Drugs that block a key protein involved in the newly discovered process already exist, and at least one is in clinical trials to treat cancer. However, most of these inhibitors do not cross the blood-brain barrier, which separates the brain from circulating blood and prevents large molecules from entering the brain. The MIT team hopes to develop drugs that can cross this barrier, possibly by packaging them into nanoparticles.

The study, which appears in Cancer Cell on September 28, is a collaboration between the labs of Hemann; Jacqueline Lees, associate director of the Koch Institute and the Virginia and D.K. Ludwig Professor for Cancer Research; and Phillip Sharp, an MIT Institute Professor and member of the Koch Institute. The paper’s lead authors are former MIT postdoc Christian Braun, recent PhD recipient Monica Stanciu, and research scientist Paul Boutz.

Too much splicing

Several years ago, Stanciu and Braun came up with the idea of using a type of screen known as an shRNA screen to seek out genes involved in glioblastoma. This test involves using short strands of RNA to block the expression of specific genes. Using this approach, researchers can turn off thousands of different genes, one per tumor cell, and then measure the effects on cell survival.

One of the top hits from this screen was the gene for a protein called PRMT5. When this gene was turned off, tumor cells stopped growing. Previous studies had linked high levels of PRMT5 to cancer, but the protein is an enzyme that can act on hundreds of other proteins, so scientists weren’t sure exactly how it was stimulating cancer cell growth.
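How a hit like PRMT5 emerges from such a screen can be sketched with toy numbers: shRNAs targeting a gene the tumor cells depend on become depleted as the culture grows, so a strongly negative log fold change in their sequencing read counts flags the gene. The counts below are invented for illustration.

```python
# Toy scoring of an shRNA dropout screen: compare each shRNA's abundance
# before and after growth; strong depletion suggests the targeted gene is
# needed for tumor cell survival. All counts are invented.

import math

initial = {"PRMT5_sh1": 1000, "PRMT5_sh2": 800, "CTRL_sh1": 950, "CTRL_sh2": 1020}
final   = {"PRMT5_sh1":   60, "PRMT5_sh2":  45, "CTRL_sh1": 900, "CTRL_sh2": 1100}

for shrna in initial:
    lfc = math.log2(final[shrna] / initial[shrna])
    flag = "DEPLETED (candidate hit)" if lfc < -2 else ""
    print(f"{shrna:10s} log2 fold change = {lfc:+.2f} {flag}")
```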

Further experiments in which the researchers analyzed other genes affected when PRMT5 was inhibited led them to hypothesize that PRMT5 was using a special type of gene splicing to stimulate tumor growth. Gene splicing is required to snip out portions of messenger RNA, known as introns, that are not needed after the gene is copied into mRNA.

In 2015, Boutz and others in Sharp’s lab discovered that about 10 to 15 percent of human mRNA strands still have one to three “detained introns,” even though they are otherwise mature. Because of those introns, these mRNA molecules can’t leave the nucleus.

“What we think is that these strands are basically an mRNA reservoir. You have these unproductive isoforms sitting in the nucleus, and the only thing that keeps them from being translated is that one intron,” says Braun, who is now a physician-scientist at Ludwig Maximilian University of Munich.

In the new study, the researchers discovered that PRMT5 plays a key role in regulating this type of splicing. They speculate that neural stem cells utilize high levels of PRMT5 to guarantee efficient splicing and therefore expression of proliferation genes. “As the cells move toward their mature state, PRMT5 levels drop, detained intron levels rise, and those messenger RNAs associated with proliferation get stuck in the nucleus,” Lees says.

When brain cells become cancerous, PRMT5 levels are typically boosted and the splicing of proliferation-associated mRNA is improved, ultimately helping the cells to grow uncontrollably.

Predicting success

When the researchers blocked PRMT5 in tumor cells, they found that the cells stopped dividing and entered a dormant, nondividing state. PRMT5 inhibitors also halted growth of glioblastoma tumors implanted under the skin of mice, but they did not work as well in tumors located in the brain, because of the difficulties in crossing the blood-brain barrier.

Unlike many existing cancer treatments, the PRMT5 inhibitors did not appear to cause major side effects. The researchers believe this may be because mature cells are not as dependent as cancer cells on PRMT5 function.

The findings shed light on why researchers have previously found PRMT5 to be a promising potential target for cancer treatment, says Omar Abdel-Wahab, an assistant member in the Human Oncology and Pathogenesis Program at Memorial Sloan Kettering Cancer Center, who was not involved in the study.

“PRMT5 has a lot of roles, and until now, it has not been clear what is the pathway that is really important for its contributions to cancer,” says Abdel-Wahab. “What they have found is that one of the key contributions is in this RNA splicing mechanism, and furthermore, when RNA splicing is disrupted, that key pathway is disabled.”

The researchers also discovered a biomarker that could help identify patients who would be most likely to benefit from a PRMT5 inhibitor. This marker is a ratio of two proteins that act as co-factors for PRMT5’s splicing activity, and reveals whether PRMT5 in those tumor cells is involved in splicing or some other cell function.

“This becomes really important when you think about clinical trials, because if 50 percent or 25 percent of tumors are going to have some response and the others are not, you may not have a way to target it toward those patients that may have a particular benefit. The overall success of the trial may be damaged by lack of understanding of who’s going to respond,” Hemann says.

The MIT team is now looking into the potential role of PRMT5 in other types of cancer, including lung tumors. They also hope to identify other genes and proteins involved in the splicing process they discovered, which could also make good drug targets.

Spearheaded by students and postdocs from several different labs, this project offers a prime example of the spirit of collaboration and “scientific entrepreneurship” found at MIT and the Koch Institute, the researchers say.

“I think it really is a classic example of how MIT is a sort of bottom-up place,” Lees says. “Students and postdocs get excited about different ideas, and they sit in on each other’s seminars and hear interesting things and pull them together. It really is an amazing example of the creativity that young people at MIT have. They’re fearless.”

The research was funded by the Ludwig Center for Molecular Oncology at MIT, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund, the National Institutes of Health, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Publication: Christian J. Braun, et al., “Coordinated Splicing of Regulatory Detained Introns within Oncogenic Transcripts Creates an Exploitable Vulnerability in Malignant Glioma,” Cancer Cell, 2017; DOI:10.1016/j.ccell.2017.08.018

Source: Anne Trafton, MIT News

Olivine is the most abundant mineral in Earth’s upper mantle, which comprises the bulk of the planet’s tectonic plates. (Photo: Olivine xenoliths in basalt, John St. James/Flickr)

New research from the University of Pennsylvania gives scientists a better idea of olivine’s strength, with implications for how tectonic plates form and move.

No one can travel inside the earth to study what happens there. So scientists must do their best to replicate real-world conditions inside the lab.

“We are interested in large-scale geophysical processes, like how plate tectonics initiates and how plates move underneath one another in subduction zones,” said David Goldsby, an associate professor at the University of Pennsylvania. “To do that, we need to understand the mechanical behavior of olivine, which is the most abundant mineral in the upper mantle of the earth.”

Goldsby, teaming with Christopher A. Thom, a doctoral student at Penn, as well as researchers from Stanford University, the University of Oxford and the University of Delaware, has now resolved a long-standing question in this area of research. While previous laboratory experiments resulted in widely disparate estimates of the strength of olivine in the lithospheric mantle, the relatively cold and therefore strong part of Earth’s uppermost mantle, the new work, published in the journal Science Advances, resolves the previous disparities by finding that the smaller the grain size of the olivine being tested, the stronger it is.

Because olivine in Earth’s mantle has a larger grain size than most olivine samples tested in labs, the results suggest that the mantle, which comprises up to 95 percent of the planet’s tectonic plates, is in fact weaker than once believed. This more realistic picture of the interior may help researchers understand how tectonic plates form, how they deform when loaded with the weight of, for example, a volcanic island such as Hawaii, or even how earthquakes begin and propagate.

For more than 40 years, researchers have attempted to predict the strength of olivine in the earth’s lithospheric mantle from the results of laboratory experiments. But tests in a lab are many layers removed from the conditions inside the earth, where pressures are higher and deformation rates are much slower than in the lab. A further complication is that, at the relatively low temperatures of Earth’s lithosphere, the strength of olivine is so high that it is difficult to measure its plastic strength without fracturing the sample. The results of existing experiments have varied widely, and they don’t align with predictions of olivine strength from geophysical models and observations.

In an attempt to resolve these discrepancies, the researchers employed a technique known as nanoindentation, which is used to measure the hardness of materials. Put simply, the researchers measure the hardness of a material, which is related to its strength, by applying a known load to a diamond indenter tip in contact with a mineral and then measuring how much the mineral deforms. While previous studies have employed various high-pressure deformation apparatuses to hold samples together and prevent them from fracturing, set-ups that make measurements of strength challenging, nanoindentation does not require a complex apparatus.

“With nanoindentation,” Goldsby said, “the sample in effect becomes its own pressure vessel. The hydrostatic pressure beneath the indenter tip keeps the sample confined when you press the tip into its surface, allowing the sample to deform plastically without fracture, even at room temperature.”

Performing 800 nanoindentation experiments in which they varied the size of the indentation by varying the load applied to the diamond tip pressed into the sample, the research team found that the smaller the size of the indent, the harder, and thus stronger, olivine became.
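In standard indentation terms (textbook relations, not results specific to this paper), hardness is the peak load divided by the projected contact area,

    H = \frac{P_{\max}}{A_c}

and the indentation size effect is commonly described by the Nix–Gao relation

    H(h) = H_0 \sqrt{1 + \frac{h^*}{h}}

where h is the indentation depth, H_0 the bulk (large-indent) hardness, and h^* a material length scale: the smaller the indent, the higher the measured hardness, which is the trend the team observed.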

“This indentation size effect had been seen in many other materials, but we think this is the first time it’s been shown in a geological material,” Goldsby said.

Looking back at previously collected strength data for olivine, the researchers determined that the discrepancies in those data could be explained by invoking a related size effect, whereby the strength of olivine increases with decreasing grain size of the tested samples. When these previous strength data were plotted against the grain size in each study, all the data fit on a smooth trend which predicts lower-than-thought strengths in Earth’s lithospheric mantle.

In a related paper by Thom, Goldsby and colleagues, published recently in the journal Geophysical Research Letters, the researchers examined patterns of roughness in faults that have become exposed at the earth’s surface due to uplift and erosion.

“Different faults have a similar roughness, and there’s an idea published recently that says you might get those patterns because the strength of the materials on the fault surface increases with the decreasing scale of roughness,” Thom said. “Those patterns and the frictional behavior they cause might be able to tell us something about how earthquakes nucleate and how they propagate.”

In future work, the Penn researchers and their team would like to study size-strength effects in other minerals and also to focus on the effect of increasing temperature on size effects in olivine.

Goldsby and Thom coauthored the study with Kathryn M. Kumamoto of Stanford; David Wallis, Lars N. Hansen, David E. J. Armstrong and Angus J. Wilkinson of Oxford University; and Jessica M. Warren of Delaware.

The work was supported by the Natural Environment Research Council (Grant NE/M000966/1) and National Science Foundation (grants 1255620, 1464714 and 1550112).

Publication: Kathryn M. Kumamoto, et al., “Size effects resolve discrepancies in 40 years of work on low-temperature plasticity in olivine,” Science Advances 13 Sep 2017: Vol. 3, no. 9, e1701338; DOI: 10.1126/sciadv.1701338

Source: Katherine Unger Baillie, University of Pennsylvania

Using NASA’s Spitzer and Swift missions, as well as the Belgian AstroLAB IRIS observatory, astronomers reveal new details on one of the most mysterious stellar objects.

Called KIC 8462852, also known as Boyajian’s Star, or Tabby’s Star, the object has experienced unusual dips in brightness — NASA’s Kepler space telescope even observed dimming of up to 20 percent over a matter of days. In addition, the star has had much subtler but longer-term enigmatic dimming trends, with one continuing today. None of this behavior is expected for normal stars slightly more massive than the Sun. Speculations have included the idea that the star swallowed a planet, or that it is unstable, and a more imaginative theory involves a giant contraption or “megastructure” built by an advanced civilization, which could be harvesting energy from the star and causing its brightness to decrease.

A new study using NASA’s Spitzer and Swift missions, as well as the Belgian AstroLAB IRIS observatory, suggests that the cause of the dimming over long periods is likely an uneven dust cloud moving around the star. This flies in the face of the “alien megastructure” idea and the other more exotic speculations.

The smoking gun: Researchers found less dimming in the infrared light from the star than in its ultraviolet light. Any object larger than dust particles would dim all wavelengths of light equally when passing in front of Tabby’s Star.

“This pretty much rules out the alien megastructure theory, as that could not explain the wavelength-dependent dimming,” said Huan Meng, at the University of Arizona, Tucson, who is lead author of the new study published in The Astrophysical Journal. “We suspect, instead, there is a cloud of dust orbiting the star with a roughly 700-day orbital period.”

Why Dust is Likely

We experience the uniform dimming of light often in everyday life: If you go to the beach on a bright, sunny day and sit under an umbrella, the umbrella reduces the amount of sunlight hitting your eyes in all wavelengths. But if you wait for the sunset, the sun looks red because the blue and ultraviolet light is scattered away by tiny particles. The new study suggests the objects causing the long-period dimming of Tabby’s Star can be no more than a few micrometers in diameter (about one ten-thousandth of an inch).
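The sunset analogy works because particles much smaller than the wavelength of light scatter in the Rayleigh regime, where the scattered intensity rises steeply toward short wavelengths,

    I_{\mathrm{scat}} \propto \lambda^{-4}

while particles much larger than the wavelength block all colors nearly equally. That contrast is what lets astronomers infer particle sizes from how strongly the dimming depends on wavelength.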

From January to December 2016, the researchers observed Tabby’s Star in ultraviolet using Swift, and in infrared using Spitzer. Supplementing the space telescopes, researchers also observed the star in visible light during the same period using AstroLAB IRIS, a public observatory with a 27-inch-wide (68 centimeter) reflecting telescope located near the Belgian village of Zillebeke.

Based on the strong ultraviolet dip, the researchers determined the blocking particles must be bigger than interstellar dust, small grains that could be located anywhere between Earth and the star. Such small particles could not remain in orbit around the star because pressure from its starlight would drive them farther into space. Dust that orbits a star, called circumstellar dust, is not so small it would fly away, but also not big enough to uniformly block light in all wavelengths. This is currently considered the best explanation, although others are possible.
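The size threshold can be made quantitative with the textbook ratio of radiation pressure to stellar gravity on a spherical grain of radius s and density \rho (assuming a radiation-pressure efficiency near unity; this is a general relation, not a calculation from the study):

    \beta = \frac{F_{\mathrm{rad}}}{F_{\mathrm{grav}}} = \frac{3 L_*}{16 \pi G M_* c \rho s}

Because \beta grows as the grain shrinks, grains below a critical size are driven outward faster than gravity can hold them, while micrometer-scale and larger circumstellar grains can remain bound.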

Collaboration with Amateur Astronomers

Citizen scientists have had an integral part in exploring Tabby’s Star since its discovery. Light from this object was first identified as “bizarre” and “interesting” by participants in the Planet Hunters project, which allows anyone to search for planets in the Kepler data. That led to a 2016 study formally introducing the object, which is nicknamed for Tabetha Boyajian, now at Louisiana State University, Baton Rouge, who was the lead author of the original paper and is a co-author of the new study. The recent work on long-period dimming involves amateur astronomers who provide technical and software support to AstroLAB.

Several AstroLAB team members who volunteer at the observatory have no formal astronomy education. Franky Dubois, who operated the telescope during the Tabby’s Star observations, was the foreman at a seat belt factory until his retirement. Ludwig Logie, who helps with technical issues on the telescope, is a security coordinator in the construction industry. Steve Rau, who processes observations of star brightness, is a trainer at a Belgian railway company.

Siegfried Vanaverbeke, an AstroLAB volunteer who holds a Ph.D. in physics, became interested in Tabby’s Star after reading the 2016 study, and persuaded Dubois, Logie and Rau to use AstroLAB to observe it.

“I said to my colleagues: ‘This would be an interesting object to follow,'” Vanaverbeke recalled. “We decided to join in.”

University of Arizona astronomer George Rieke, a co-author on the new study, contacted the AstroLAB group when he saw their data on Tabby’s Star posted in a public astronomy archive. The U.S. and Belgium groups teamed up to combine and analyze their results.

Future Exploration

While study authors have a good idea why Tabby’s Star dims on a long-term basis, they did not address the shorter-term dimming events that happened in three-day spurts in 2017. They also did not confront the mystery of the major 20-percent dips in brightness that Kepler observed while studying the Cygnus field during its primary mission. Previous research with Spitzer and NASA’s Wide-field Infrared Survey Explorer suggested a swarm of comets may be to blame for the short-period dimming. Comets are also one of the most common sources of dust that orbits stars, and so could also be related to the long-period dimming studied by Meng and colleagues.

Now that Kepler is exploring other patches of sky in its current mission, called K2, it can no longer follow up on Tabby’s Star, but future telescopes may help unveil more secrets of this mysterious object.

“Tabby’s Star could have something like a solar activity cycle. This is something that needs further investigation and will continue to interest scientists for many years to come,” Vanaverbeke said.

PDF Copy of the Study: Extinction and the Dimming of KIC 8462852

Source: Elizabeth Landau, Jet Propulsion Laboratory
