Transparent solar panels could replace windows

July 17th, 2019

Researchers at the University of the Free State (UFS) in South Africa have already created a working model, which they say needs further refinement to increase efficiency before it can be brought to market. They hope it will be commercialised within the decade.

“An innovation like this which can help to replace traditional means of carbon-based fuel for power generation in our daily lives would be hugely welcome,” said Hendrik Swart, senior professor at the UFS department of physics. “The idea is to develop glass that is transparent to visible light, just like the glass you find in the windows of buildings, motor vehicles and mobile electronic devices.

“However, by incorporating the right phosphor materials inside the glass, the light from the sun that is invisible to the human eye (ultraviolet and infrared light) can be collected, converted and concentrated to the sides of the glass panel where solar panels can be mounted.”

The researchers say the product will have the capacity to revolutionise affordable solar power for homes, factories, and cities. Another possible application is electric cars, where solar panel windows could be used to power the vehicle.

Lucas Erasmus, who is working with Professor Swart, added: “We are also looking at implementing this idea into hard, durable plastics that can act as a replacement for zinc roofs. This will allow visible light to enter housing and the invisible light can then be used to generate electricity.”

The study is ongoing, with UFS experimenting with and testing different materials to optimise the device in the laboratory. Finally, the design must be scaled up for testing in the field.

IET promoted: DPSP 2020 conference – challenges for power system protection

July 17th, 2019

Richard Adams, Principal Engineer at Ramboll and Chair of the Developments in Power System Protection (DPSP) conference, discusses new technologies, the challenges around them and how they’re affecting protection engineers.

What impact are renewables having on the power system protection industry?

As we move away from traditional fossil fuels towards renewables, system inertia and the fault current levels we get on the system are falling, and they will continue to fall because the newer generation types are connected via power electronics rather than the large synchronous generators we’ve been used to.

Traditionally, we’ve relied on very high fault current levels to be able to detect a fault. But with decreasing levels, the feeling is that it’s going to become more difficult, or may be impossible, to differentiate between actual load conditions and fault conditions.

Has industry found a solution to decreasing fault current levels?

Some people think that these new converter-based sources should be configured so that under fault conditions they provide more current, acting in a similar way to traditional synchronous generation. Others argue that we shouldn’t be doing this because high currents stress equipment, and we should instead find new ways of detecting fault conditions.

One way we can do this is with travelling waves, which have been used for many years for fault location but not much for detecting and isolating faults. Rather than fault current, they rely on the voltage and the characteristics of the circuit for fault detection. We may need to come up with other methods as well.
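The travelling-wave idea lends itself to a concrete illustration. The sketch below is not from the interview: it shows the classic double-ended fault-location calculation, where the difference in wavefront arrival times at the two ends of a line gives the fault position. The wave speed, line length and function names are assumed example values.

```python
# Illustrative sketch: double-ended travelling-wave fault location.
# A fault launches a voltage wavefront in both directions along the line;
# the difference in arrival times at the two ends locates the fault.

WAVE_SPEED = 2.9e8   # m/s, a typical propagation speed on an overhead line
LINE_LENGTH = 100e3  # m, an example 100 km line

def fault_distance(t_a, t_b, length=LINE_LENGTH, v=WAVE_SPEED):
    """Distance (m) of the fault from end A, given wavefront
    arrival times (s) measured at ends A and B."""
    return (length + v * (t_a - t_b)) / 2

# A fault 30 km from end A: the wave reaches A after 30km/v and B after 70km/v.
t_a = 30e3 / WAVE_SPEED
t_b = 70e3 / WAVE_SPEED
print(fault_distance(t_a, t_b) / 1e3)  # ≈ 30.0 (km)
```

Note that the calculation uses only timing and the circuit's propagation characteristics, not fault current magnitude, which is why it remains usable as fault levels fall.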

What about other greener technologies like electric vehicles? Are they having an impact on power system protection?

There is talk of electric vehicles being used as a source of supply under emergency conditions. So because they’ve got batteries, if the network needed it, you could actually draw electricity from these batteries for a short period to meet the demand. That’s going to mean that the power is then flowing from the vehicle back into the network.

Traditionally, the distribution networks have been configured only to draw power and not to provide power, so that could affect some directional protections and it may mean that networks need to have protection modifications.

So I think there are some challenges ahead and it remains to be seen exactly the extent of those challenges and how the infrastructure needs to be modified.


What effect has the ‘digital revolution’ had on the industry?

Modern relays are digital by design and provide far more opportunity for integration, and more protection functions, than the older electromechanical relays ever did. In the past, there would be one relay per function, whereas now you can have a whole host of functions within one relay.

The integration of functions reduces cost and the space required in the substations, which means we can reduce the size of buildings and land required. The flip side is that modern relays have a shorter life span than the older relays.

Do the modern relays create any challenges?

Modern relays contain a lot of functions, but there may be some which you don’t want to use. It’s important that we make sure that they are turned off and that they’re not going to cause any problems in service.

Also, modern equipment has Ethernet, USB or wireless connectivity. This can be very useful for upgrading functionality or settings and for remote access, but it creates a vulnerability, a little gateway into the substation, which some people could exploit.

In theory you could have someone sitting in a vehicle outside of a substation, trying to hack into the control system or relays from just a few metres away, without physically going in and doing anything.

How is industry responding to the complexity of modern relays?

A lot of utilities remain very conservative, understandably due to the implications of lost supply. I don’t think wireless access will be used for some time yet, unless there are a lot of guarantees in terms of security.

I think the future protection engineer is going to need to know a lot more about control, communications and cyber security because it’s very difficult to separate them now. What might have been handled by a number of different departments in years gone by is becoming one department with multiskilled engineers.

 

Richard Adams is chair of DPSP 2020, taking place in Liverpool on 9th-12th March 2020

We are accepting abstracts for the conference programme until Friday 19th July 2019

Find out more at theiet.org/dpsp

Apollo 11 and us: pioneering the man-machine interface

July 16th, 2019

No one who grew up dreaming of being an astronaut will have imagined it might involve merely being some anonymous component of a complex feedback system. That’s not what being an astronaut is. Being an astronaut is being in the driving seat, right? Blast-off. Landing the Eagle. Re-entry. Heroics. But, believe it or not, the test pilots and military aviators selected to become Nasa’s first astronauts had a fight on their hands to retain that status.

Nasa trained them with a view to their behaving like mechanical components within their spacecraft; it seemed, after all, the best way to deliver safe and successful missions. The Apollo lunar landing relied on a new kind of systems engineering known (with blinkered 1960s sexism) as the Man-Machine Interface. But which of the two elements would turn out to be the most reliable: the human or the technological?

David Mindell is professor of the history of engineering and manufacturing at the Massachusetts Institute of Technology (MIT). His book ‘Digital Apollo: Human and Machine in Spaceflight’ explores how ‘fly-by-wire’ automation was pioneered for Nasa’s space modules, and then migrated into the cockpits of jet fighters and the flight decks of civilian airliners, and even into the dashboards of cars. Mindell is our chief witness in this brief exploration of why Apollo’s guidance protocols, part human and part automatic, have had such a far-reaching effect.

The major themes of this story first emerged more than a century ago. “Even in the earliest days of aviation, there was a debate about a human’s role in the handling of flying machines,” Mindell says. “Was an aviator supposed to be the cautious chauffeur of an inherently safe machine, or the daredevil pilot of a tricky but exciting one? The fighter aircraft of the First World War were notoriously unstable, but highly manoeuvrable. Learning to fly could be as dangerous as facing the enemy.” On the other hand, larger and more stable bomber aircraft were not much use for the cut and thrust of aerial dogfighting. “There was a strong prejudice among pilots for unstable aircraft because they wanted to be seen as people with a mastery of the skies.”

A mid-20th century image emerged of handsome heroes in white scarves and leather helmets hauling at the controls as their planes screamed through the air. This may have contained a grain of truth in an age when control sticks fed directly via wire cables to an aircraft’s flaps and ailerons, but from the 1950s onwards, supersonic jets created such powerful airflows around their wings that human ‘stick and rudder’ muscle power alone could not direct them. Powered actuators took over, and the control stick in the cockpit became a distant echo of mechanical actions rather than a direct driver. As Mindell points out, this introduced a new problem. “How could a pilot know how much, or how little, effort to put into a stick movement? After the Second World War, the aerospace industry began borrowing feedback techniques from electrical engineering. The human pilot was now considered as a component in a feedback system.”

By 1960 Nasa was experimenting with hypersonic X-15 rocket planes, which pushed into the highest realms of the atmosphere where the air thins almost to nothing and wings have no purchase. Rocket thrusters controlled an X-15’s attitude (its orientation relative to the horizon). Sensory feedback, whether real or conjured by avionics, was no longer a reliable guide for pilots in such a strange environment, and the X-15 proved hard to handle. Ground simulators tested the ‘feedback loop’ relationship between pilots and the systems that actually steered an X-15, but “unconsciously, pilots behaved differently during actual flights”, says Mindell. “The possibility of losing his life made a pilot more sensitive in a real situation than in a simulation, sometimes causing the X-15s to become unstable as a consequence of pilot-induced oscillations.”

Engineers began thinking in terms of a pilot’s ‘gain’, just as we would regard the volume control on a stereo. How much force should pilots apply to their control sticks, and what force should they feel apparently coming back into the stick from the actuator systems? It was hard to quantify anything as nebulous as the human psyche, and some of Nasa’s rocket engineers began to wonder about skipping that problem altogether.

Even before the first manned space missions began in 1961, nuclear missiles proved the concept of autonomous piloting. The underwater-launched Polaris missile had a range of 4,600km (2,500 miles) and could deliver its warhead with an accuracy of 1km, guided by a gyroscopic inertial measurement unit and accelerometers allied to a remarkably compact electronic computer whose outputs fed into the missile’s steerable rocket engine. An upgraded version of such a machine could reach the Moon unaided, so perhaps human involvement would be more of a hindrance than a benefit?

In the summer of 1959, a convention of test pilots and engineers at a hotel in Santa Monica, California, heard Wernher von Braun, the German-born designer of Apollo’s huge Saturn V rocket, deliver an unsettling lecture.

“When you consider the velocities and forces involved in missile launchings, you realise that human intervention is not only impossible, it is actually undesirable. There is little time for intelligent reaction during powered flight.” Astronauts were great at many tasks, but as rocket pilots they would be “outrageously slow and cumbersome”. Von Braun even suggested they could be anaesthetised to alleviate the g-force discomforts of launch.

It looked as if von Braun had won the argument. Chuck Yeager, the test pilot who first broke through the sound barrier in 1947 flying a rocket-powered X-1 aircraft, had no interest in human space flight because it seemed to him less like piloting and more like becoming, in his famous phrase, “spam in a can”. But the trend away from human control of rockets seemed unstoppable. Gene Kranz, renowned throughout the 1960s for his steely demeanour in Nasa’s Mission Control, knew that astronauts alone could not control machines operating at hypersonic velocities. As he recalled to historians in 1998: “In aircraft flight tests, we were moving at five miles a minute. A spacecraft moves at five miles a second, and your thought processes have to come to grips with this incredible change in dimension. In one stroke, we’d moved so far forward that human brains weren’t adequate. With computers, we started getting ahead of the game.”

According to Mindell, “astronauts feared they would be enclosed in capsules aboard automated rockets”. However, another factor was just as important as the engineering. Nasa’s manned projects were a critical element of Cold War prestige. The fantasy of astronauts with ‘the right stuff’ going head-to-head against Soviet cosmonauts caught the public imagination. America in the 1960s was not ready for machines alone to conquer the new frontier.

“Science involves collecting observations to learn about the natural world, whereas exploration expands the realm of human experience,” says Mindell. “Justifying spaceflight was ultimately a human rather than a technological aspiration. Nasa did in fact find roles for human operators that allowed them to ‘fly’ their craft in new and unexpected ways.”

The astronauts used their power as public figures to push for some degree of control over their capsules. The question became this: how could their decision-making be incorporated into vehicles that had no real need for those inputs? According to Mindell, “We have to wonder, did Nasa’s engineers design those systems, knowingly or unknowingly, to leave the astronauts a sense of mastery?” The answer to that question is complex. Apollo’s piloting regime emerged as a social and political construct as much as a technical one. It was the first true partnership between humans and autonomous systems, because this is what we, as a species, demanded of it.


Apollo 11 Lunar Module illustration 1

Image credit: Science Photo Library

This subtle truth is illustrated by the final approach and touchdown of Apollo 11’s Lunar Module (LM) on 20 July 1969. “Nothing about the LM was intuitive to fly,” says Mindell. “It had 16 thrusters for attitude control, and a large descent engine on the bottom that pivoted on gimbals to steer the spacecraft. These and other complexities meant that the LM had no natural match to a human pilot. Only its computer, the software, and a host of sensors and actuators gave the astronauts the feeling that they were ‘flying’ the vehicle.”

A Landing Point Designator (LPD) enabled mission commander Neil Armstrong to fine-tune his desired touchdown area by nudging a hand controller, so that he could avoid rocks and craters. The onboard Apollo Guidance Computer (AGC) calculated in near-real-time how the inherently unstable LM, hovering delicately on its rocket plume like a book balanced on a pencil, should deliver the moves that Armstrong wanted.

After the mission, he defended his “god-given right to be wishy-washy about where I was going to land”. But he made that landing by relaying his command decisions to the AGC via the world’s first digital fly-by-wire system, whose defining characteristic, beyond the main task of getting Apollo to the Moon, was to smooth out oscillations and keep the inputs and outputs of the humans and the LM alike within safe limits.

A series of computer alarms on board Apollo 11’s LM during final descent caught the media’s attention, and have been misinterpreted ever since. The August 1969 edition of Electronic Design magazine wrongly explained how Armstrong “seized the manual controls of the LM” in response to a malfunctioning AGC. In fact, “what the computer did next was not a bug in the program, but a manifestation of robustness in the design”, says Mindell. The AGC put non-essential tasks to one side to avoid a memory overload caused by an unexpected clash between two radars. One ‘non-essential’ task was the visual display that was to reassure the astronauts that the AGC was working. Although the display repeatedly froze, “Armstrong could still feel the LM responding to his inputs”. The AGC often had to restart various operations, “but Armstrong could not even feel the hiccups”.
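The AGC's behaviour under overload can be loosely illustrated in code. The sketch below is a modern analogue, not AGC software: a toy executive keeps only the most critical jobs when a scheduling cycle overloads, shedding the rest, much as the AGC shed the display task during the alarms. The job names, priorities and capacity are invented for illustration.

```python
# Toy priority executive (a loose analogue of the AGC's behaviour, not
# real AGC code): under overload, keep the most critical jobs and shed
# the least critical ones rather than crashing.

def run_cycle(jobs, capacity):
    """Keep the `capacity` most critical jobs (lowest priority number
    first) and return (kept, shed)."""
    jobs = sorted(jobs)                      # most critical first
    return jobs[:capacity], jobs[capacity:]  # shed whatever doesn't fit

# An overloaded cycle: three jobs, room for two.
jobs = [(0, "guidance"), (1, "engine gimbal"), (5, "DSKY display refresh")]
kept, shed = run_cycle(jobs, capacity=2)
print([name for _, name in kept])  # ['guidance', 'engine gimbal']
print([name for _, name in shed])  # ['DSKY display refresh']
```

The design point is graceful degradation: the low-priority display refresh is dropped while guidance keeps running, which is why Armstrong "could not even feel the hiccups".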

Mindell has no patience with lazy comparisons between Apollo’s computer and a digital watch. “Simply focusing on memory size, or the computer’s speed, misses the point. Who among us would risk our lives by relying on any number of modern computers? Apollo’s AGC was rugged and reliable, and it never failed in flight.”

Perhaps the big question is: could an AGC have landed an LM unaided by humans? Apart from the possibility of skidding on a boulder, the answer is ‘yes’. Nasa’s unmanned Surveyor lunar probes proved this (and a Soviet Lunokhod probe with a large unmanned wheeled rover also landed safely in November 1970). Apollo 15 veteran David Scott admitted in a June 1982 lecture at the Computer Museum in Boston, Massachusetts: “The fact that the computer could land an LM automatically indicated that a tremendous payload could be sent if the astronauts were removed.” He also said: “I believe in computers, but when I’m about to touch down on the Moon, I’m going to do that, not the computer. You have to have your hands on the stick. You are probably fooling yourself, because you are still going through the computer. You feel different, though.”

In 1963, senior Apollo project manager Joe Shea told his Nasa colleagues: “For a while I was afraid that Apollo might be one of the last battlefields on which the human race took up arms against machines.” But he concluded that this wasn’t a war so much as a new symbiosis. “The terms manual and automatic carry more emotional than technical content.” Mindell agrees, and disparages “the myth of full autonomy, the utopian idea that robots can operate on their own. The machine that runs entirely independently of human direction is useless. Only a rock is truly autonomous. For any apparently autonomous system, we can always find the wrapper of human control that makes it useful.” 


Apollo 11 Lunar Module illustration 3

Image credit: Science Photo Library

Software

Driverless?

In his book, David Mindell reminds us: “60 per cent of the software for the first digital fly-by-wire aircraft, developed in the early 1970s, consisted of Apollo code.” The recent crashes of two Boeing 737 Max aircraft, caused apparently by clashes between software working with bad instrument data and pilots struggling to regain control, seems to highlight the worst aspects of the Apollo Guidance Computer’s many descendants, until we remember a simple fact. Before the emergence of fly-by-wire, 1,000 deaths per year was a commonplace figure in civil aviation. Today, fatalities annually amount to around 12 for every billion passenger journeys. Flying really is the least hazardous mode of transport, and this safety record is down to the fine-tuning of the relationship between humans and semi-autonomous machines pioneered by Apollo.

Can similar systems transform our experiences on the road? Despite all the noise heralding the emergence of driverless cars, Mindell is reminded of space pioneers such as von Braun, who thought that astronauts might prefer to snooze while their rockets cruised to orbit. “Fly-by-wire in aviation is highly regulated, and the human overseers are extremely well trained,” he says. “Airplanes operate in a rarefied environment with relatively few obstacles, and despite all this, the recent Boeing accidents tell us that even a small change in software can end in tragic accidents. Cars are far less regulated, drivers are barely trained, and there’s a lot more stuff around to crash into. When it comes to the safety of passengers, drivers or operators, or whatever you want to call the human occupants, it’s going to be very challenging to make genuinely autonomous cars.”

Could fly-by-wire concepts help with another, more subtle problem, by damping down the terrifying ‘pilot-induced oscillations’ of online political discourse? The more we click on unreliable news feeds, the more nonsense the system feeds back to us, until the bad-quality material predominates. In December 2017 former Facebook executive Chamath Palihapitiya warned, “the dopamine-driven feedback loops that we have created are destroying how society works”.

Unfortunately, “social media is driven by a certain kind of capitalism, rather than by the desire for safe outcomes”, says Mindell. “Boeing and Airbus don’t compete on safety. In fact they share data because safety is essential to both companies. In contrast, social media companies compete for our attention, rather than collaborating to deliver a stable outcome.” And anyway, asks Mindell, “Who would determine what constitutes a useful outcome? Using technology to enforce social stability is a scary idea. The Nazis were good at it.”

In 1958, Kelly Johnson, the designer of the Lockheed SR-71 Blackbird (the fastest piloted aircraft ever made), summed up his ideal balance between emerging fly-by-wire technologies and the inputs of Blackbird pilots. “The system should be stable,” he said. “But not too stable.” Perhaps that’s a good way to think of the best balance between humans and digital systems as we continue to navigate the modern Human-Machine Interface.


Apollo 11 Lunar Module in flight

Image credit: Nasa

Mapping

The Moon

Ordnance Survey recently commemorated the 1969 Moon landing by creating a map of the site using Nasa open data, depicting the landscape where Armstrong took his steps.

When choosing a landing site, Nasa’s Site Selection Board identified five possibilities for Apollo 11. The original requirement that the site be free of craters was relaxed and a site within the Sea of Tranquility was chosen.

The map includes the names of the landing site, craters, lunar mares, bays, mountain ranges, ridges, valleys and trenches.

In making a map of the Moon, the creators had to identify available data representing the Moon’s topography. There were many Digital Elevation Models, or DEMs, of the Moon, which are 3D representations of a terrain’s surface.

Paul Naylor, the creator behind the OS Moon Map, placed a landscape legend to the south of the map. It included, among other things, the map title, credits, a timeline of the Apollo 11 mission and an extent map. The labels were then cartographically positioned and coloured to match the main aesthetics of the map topography.


Moon map

Image credit: Ordnance Survey

UK Space Agency bids to develop comms system for future Moon base

July 12th, 2019

The Press Association (PA) reports that SSTL, a manufacturer of small satellites based in Guildford, will lead the bid, which is being supported by the UK Space Agency.

The system will allow astronauts and rovers on the Moon to more easily communicate with the Gateway and Earth.

The Gateway is intended as a future outpost to serve as a laboratory and short-term accommodation post for astronauts exploring the Moon.

While the project is led by Nasa, the Gateway is meant to be developed, serviced, and used in collaboration with commercial and international partners.

First proposed in 2012, the facility is expected to be built some time in the 2020s and will be critical for future missions designed to expand a human presence to the Moon, Mars, and deeper into the Solar System.

The UK Space Agency’s head of space exploration Sue Horne said it was also bidding for the refuelling features of the joint effort.

“Europe – hopefully, if we get sufficient subscriptions – will be building the habitation module and the service module,” she told the PA news agency.

“In the UK, we would like to do the communications system and the refuelling element but there will be a lot of competition for the refuelling element.

“I think on the refuelling element, it’s probably 50/50, we have a much better chance of getting the communications – we have a strong communications industry.

“Looking to the future Moon programme, there’s a lot more commercial activity. With India, Israel, the US and China all sending missions, there is a demand for communications services around the Moon, so there is a UK company, SSTL, planning to develop a commercial communications service there, and the UK Space Agency is helping them in that endeavour.”

In November, the next round of funding decisions will be determined by the European Space Agency (ESA), of which the UK is a member.

Horne said Italy was the biggest possible competitor for the UK on the communications side, while the French and Germans were challengers for refuelling.

“There’s a lot of technology we have to develop and the best place to test it out is on the Moon,” she continued.

“It’s nearer, and therefore cheaper and easier to test it out on the Moon, so we need to use the Moon as a test bed to enable us to do the more distant places like Mars.”

But science historian James Burke, who led the original programme covering the 1969 Moon landing with Sir Patrick Moore, has said that going back to the Moon is a waste of money.

He said he does not believe there is much political desire to put humans on the lunar surface due to the high costs, but indicated that the Chinese are the ones to watch for further space exploration.

“(Donald) Trump wants to go back to the Moon, Nasa talks about going to Mars, I frankly think that there is no political appetite for doing either of them in America, either the effort or the money and the expenditure,” Burke told PA.

“Where there is, or rather where public opinion doesn’t matter, and where there’s loads of money, is China.

“My bet will be we’ll see a Chinese landing on Mars within the next 10 years.”

Location, location, location: precision tracking with What3Words

July 12th, 2019

In the early part of last year, a young woman in the Humberside area of the UK was abducted and taken to a locked room where she had no idea of her location. However, she could still dial the emergency services on her mobile phone. The operator was able to send her a text message containing a hyperlink, which enabled the phone to use the Global Positioning System (GPS) to identify her location to within just a few feet and display it to her as a unique sequence of three words. The woman read that sequence out to the operator, who was then able to send a police car to the exact location.

That is because the emergency service now exploits a capability developed by a London-based company called, fittingly, What3Words. The crucial role it played was confirmed in an official statement by the Humberside Police Force: “A victim of sexual assault was being held hostage, not knowing where she was. A call-handler talked the victim through What3Words and the three-word address was passed to dispatchers, resulting in the recovery of the victim and the capture of the offender.”

The company has covered the whole surface of the Earth with squares measuring 3×3 metres and assigned each of those squares a unique identity that is represented by a sequence of just three words chosen from a vocabulary of 40,000 English words. By combining GPS signalling with a simple app on a device, such as a smartphone, a user can identify their own location and communicate it to someone else or identify another location anywhere else in the world with equal precision.

There are, in fact, 57 trillion such squares, though the other 35 languages that the system currently serves manage to get by with vocabularies of just 25,000 words since they only cover landmasses and territorial waters. They include Arabic, Chinese, Russian, Tamil, Thai and Turkish as well as all major western European languages.

The company’s chief marketing manager, Giles Rhys Jones, confirms what is involved: “We have a very simple algorithm that turns long, complex GPS coordinates comprising 18 digits into three-word addresses, or the other way around, and that is it.”
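What3Words has not published its production algorithm, but the core idea Rhys Jones describes, a reversible mapping between a grid-square index and a triple of words, can be sketched as a toy mixed-radix encoder. The four-word vocabulary and function names below are invented for illustration; the real system also shuffles its assignments so that similar addresses land far apart, which this sketch does not attempt.

```python
# Toy illustration (not the real What3Words algorithm): a reversible
# mixed-radix mapping between a single grid-square index and a
# three-word address drawn from a fixed vocabulary.

WORDS = ["apple", "brick", "cloud", "daisy"]  # real system: ~40,000 words
BASE = len(WORDS)

def index_to_words(index):
    """Convert a grid-square index into a three-word address."""
    w3 = index % BASE
    w2 = (index // BASE) % BASE
    w1 = (index // BASE // BASE) % BASE
    return f"{WORDS[w1]}.{WORDS[w2]}.{WORDS[w3]}"

def words_to_index(address):
    """Convert a three-word address back into its grid-square index."""
    w1, w2, w3 = address.split(".")
    return (WORDS.index(w1) * BASE + WORDS.index(w2)) * BASE + WORDS.index(w3)

addr = index_to_words(37)
print(addr)                  # cloud.brick.brick
print(words_to_index(addr))  # 37, the mapping round-trips exactly
```

With a 40,000-word vocabulary, the number of distinct triples is 40,000 cubed, which is 6.4 × 10^13, comfortably more than the 57 trillion squares the system needs to label.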

The underlying premise, says Rhys Jones, is that the world is still “badly addressed”. Even in developed countries, locations such as industrial estates may not have a street and numbering system, while elsewhere such a system may simply not exist at all. Nor will it exist in temporary areas of habitation such as refugee camps created as a result of war or natural disaster. However, the What3Words system can remedy all such deficiencies immediately.

The company was set up just six years ago after CEO Chris Sheldrick found that he could not ensure equipment for the pop groups he was booking was delivered to exactly the right unloading point at venues. Collaborating with Mohan Ganesalingam, now the company’s chief research officer, and a third co-founder, Jack Waley-Cohen, now head of corporate development, he turned the idea into reality.

The developmental task was not so much writing software as compiling an appropriate vocabulary from which to form the three-word identifiers. Basic rules included “no rude words”, but also “no homophones”. For instance, both ‘sail’ and ‘sale’ have been excluded. Rhys Jones indicates that every language requires the expertise of around 30 language consultants to compile the necessary word lists, which then become the raw data from which the system’s basic algorithm generates the three-word addresses.

The system is strictly “non-hierarchical” so that similar-sounding words are never used to identify locations in close proximity. It is therefore impossible for a user to be guided to a nearby but erroneous location by inputting a slightly incorrect word sequence. Hence ‘table.chair.damp’ is in Connecticut, US, and ‘table.chair.lamp’ is near Sydney, Australia. The system will show the specified location but suggest alternatives closer to the user’s current location. “If users get it wrong we want them to get it horribly wrong,” says Rhys Jones. The business model for the company involves it being paid a fee by other organisations that want to use it in services they provide. The app is free to download.

It is now included as a standard feature in the navigation system of all new Mercedes-Benz cars and the German car maker has become a shareholder in the company. The recent enhancement to accept voice input allows appropriate hands-free interaction with the system. That upgrade required the system to be trained through exposure to hundreds of thousands of recordings of different voices with different accents.

The system has also been adopted by the postal system of the country most commonly associated with the notion of address-free nomadism – Mongolia. Other applications range from hotel directories to UN-Asign, a crowd-sourcing app that allows individuals to collect and disseminate information about danger points such as flooding or damaged buildings in disaster situations.

Future development initiatives include adding new languages, increasing the number that accept voice input and integrating an optical character-recognition capability. The company also indicates that a substantial increase in the number of UK emergency services that use the system from the current 18 out of 44 is in the offing.

The system’s ability to guide rescuers to people in real danger is still a cardinal feature. When another woman in the Avon area of the UK drove a car containing herself and a young child off a road and into a ditch, she followed the same procedure as the kidnap victim. The official statement from Avon and Somerset Police confirms that “the victim was unable to describe where she was, so we used What3Words to get her to share her location and effectively deployed resources to the scene”.

High-definition satellites used to detect when bridges are at risk of collapse

July 10th, 2019 no comment

The monitoring system combines data from a new generation of satellites with a sophisticated algorithm, and could be used by governments or developers as a warning system to ensure large-scale infrastructure projects are safe.

The system was developed in a joint project between Nasa’s Jet Propulsion Laboratory (JPL) and the University of Bath.

The researchers verified the technique by reviewing 15 years of satellite imagery of the Morandi Bridge in Genoa, Italy, a section of which collapsed in August 2018, killing 43 people. The data revealed that the bridge showed signs of warping in the months before the tragedy.

University of Bath lecturer Dr Giorgia Giardina said: “The state of the bridge has been reported on before, but using the satellite information we can see for the first time the deformation that preceded the collapse.

“We have proved that it is possible to use this tool, specifically the combination of different data from satellites, with a mathematical model, to detect the early signs of collapse or deformation.”

While current structural monitoring techniques can detect signs of movement in a bridge or building, they focus only on specific points where sensors are placed. The new technique can be used for near-real-time monitoring of an entire structure.

Jet Propulsion Laboratory lead author Dr Pietro Milillo said: “The technique marks an improvement over traditional methods because it allows scientists to gauge changes in ground deformation across a single infrastructure with unprecedented frequency and accuracy.

“This is about developing a new technique that can assist in the characterisation of the health of bridges and other infrastructure. We couldn’t have forecast this particular collapse because standard assessment techniques available at the time couldn’t detect what we can see now. But going forward, this technique, combined with techniques already in use, has the potential to do a lot of good.”

This is made possible by advances in satellite technology, specifically the combined use of the Italian Space Agency’s COSMO-SkyMed constellation and the European Space Agency’s Sentinel-1A and 1B satellites, which allows more accurate data to be gathered.

Precise synthetic aperture radar (SAR) data, when gathered from multiple satellites pointed at different angles, can be used to build a 3D picture of a building, bridge or city street.
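The interferometric principle behind this kind of monitoring can be illustrated with a short calculation (a simplified sketch, not the JPL/Bath team's actual processing chain): the phase difference between two radar acquisitions of the same point converts to line-of-sight displacement via the radar wavelength.

```python
# A minimal sketch of repeat-pass InSAR: the interferometric phase
# difference between two acquisitions maps to line-of-sight displacement
# as d = wavelength * delta_phi / (4 * pi). This is the textbook relation,
# not the monitoring team's full processing chain.
import math

def los_displacement_mm(phase_diff_rad: float, wavelength_m: float) -> float:
    """Line-of-sight displacement (mm) from an interferometric phase
    difference, for repeat-pass InSAR."""
    return wavelength_m * phase_diff_rad / (4 * math.pi) * 1000.0

# Sentinel-1 operates in C-band (wavelength ~5.55cm), so a phase shift of
# one radian corresponds to a few millimetres of motion along the line of
# sight, which is why millimetre-level sensitivity is achievable.
print(round(los_displacement_mm(1.0, 0.0555), 2))  # ~4.42 mm
```

Fusing such measurements from satellites viewing the same structure at different angles is what allows the full 3D deformation picture described above to be reconstructed.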

Dr Giardina added: “Previously the satellites we tried to use for this research could create radar imagery accurate to within about a centimetre. Now we can use data that is accurate to within a millimetre – and possibly even better, if the conditions are right. The difference is like switching to an Ultra-HD TV – we now have the level of detail needed to monitor structures effectively.

“There is clearly the potential for this to be applied continuously on large structures. The tools for this are cheap compared to traditional monitoring and can be more extensive. Normally you need to install sensors at specific points within a building, but this method can monitor many points at one time.”

The technique can also be used to monitor movement of structures when underground excavations, such as tunnel boring, are taking place.

“We monitored the displacement of buildings in London above the Crossrail route,” said Dr Giardina. “During underground projects there is often a lot of data captured at the ground level, while fewer measurements of structures are available. Our technique could provide an extra layer of information and confirm whether everything is going to plan.”

Last year researchers from Brunel University in London created a “digital twin” for a local bridge to monitor its condition and head off problems before they occur.

China officials insist Three Gorges Dam is safe, as online rumours of collapse rise

July 9th, 2019 no comment

Writing on its official website, safety experts with the government-run China Three Gorges Corporation said that the Yangtze River dam had moved a few millimetres due to temperature and water level changes, but safety indicators remained well within their normal range.

Rumours that the dam has become ‘distorted’ have been widely discussed on social media, after a Twitter user posted satellite photos from Google Maps purporting to show that the dam had bent and was at risk of breaking.

The dam’s operators are insisting that such distortions, detailed in the satellite images, are normal and that safety has not been compromised.

“With distortions, the dam body is in an elastic state,” the China Three Gorges Corporation said. “All data are within the design limits. All structures are operating normally and the project is operating safely and reliably.”

The central government has said the problem is with the satellite imaging, rather than the dam, according to a statement reported by the Caixin financial news service.

Fan Xiao, a Chinese geologist and long-standing critic of giant dam projects, said the rumours reflected the lack of debate about the Three Gorges project, which was now considered a “national treasure” that should not be criticised.

“If talking about problems is stigmatised, then it is nothing more than putting one’s head in the sand and deceiving oneself,” Fan posted on his WeChat account on Monday. “It will solve no problems and could make them worse.”

The Three Gorges Dam was created to serve three primary purposes: flood control for the millions of people living downstream, including the cities of Wuhan, Nanjing and Shanghai; hydroelectric power production, with 32 main turbines providing an installed capacity of approximately 22,500MW; and navigation improvement along the Yangtze River, enabling a sharp rise in the number of cargo ships and tourist cruise ferries.

Begun in 1993 and eventually completed in 2009, the 185-metre-tall dam has proved to be one of China’s most expensive and controversial engineering projects, permanently submerging entire cities, towns and villages, displacing millions of people and disrupting wildlife ecosystems, in all probability causing the extinction of the baiji Yangtze river dolphin and posing an ongoing serious threat to the critically endangered Siberian crane.

Critics of the huge engineering project also say that it has increased earthquake and landslide risks in the region. In the first four months of 2010 alone, 97 significant landslides were recorded.

In 2011, China admitted that the project had caused widespread social and environmental damage and promised 124 billion yuan ($18 billion) in extra funding for those affected. However, earlier this year, a Chinese parliamentary delegate said that half of the promised money had still not been paid out.

The Three Gorges Dam is so vast that in 2010 NASA scientists calculated that the shift of water mass stored by the dam complex would increase the length of the Earth’s day by 0.06 microseconds and make the Earth slightly more round in the middle and flat on the poles.
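That figure follows from conservation of angular momentum: the fractional change in the length of day equals the fractional change in Earth's moment of inertia, ΔT/T = ΔI/I. The sketch below uses standard textbook values for Earth's moment of inertia and day length (assumptions, not figures from the article) to infer the change in moment of inertia implied by the reported 0.06 microseconds:

```python
# Back-of-the-envelope check under conservation of angular momentum:
# delta_T / T = delta_I / I, so the reported day-length change implies a
# particular shift in Earth's moment of inertia. I_EARTH is the standard
# textbook estimate, not a value from the article.
I_EARTH = 8.0e37      # Earth's moment of inertia, kg*m^2 (approximate)
DAY_S = 86400.0       # length of day, s
DELTA_T = 0.06e-6     # reported lengthening of the day, s

# Implied change in moment of inertia from the reported 0.06 microseconds:
delta_I = I_EARTH * DELTA_T / DAY_S
print(f"{delta_I:.2e} kg*m^2")  # ~5.56e+25 kg*m^2
```

The shift is tiny relative to Earth's total moment of inertia (a part in about 10^12), which is why the effect on the day is measured in fractions of a microsecond.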

Amazon seeks US approval to launch over 3,000 internet satellites

July 8th, 2019 no comment

The firm’s plans to provide broadband internet from space were revealed in April when GeekWire reported that it had submitted three sets of filings with the International Telecommunications Union (ITU).

Kuiper Systems, a subsidiary of Amazon, has filed its application with the FCC. According to the filing, the 3,236 satellites will be placed in a constellation of 98 orbital planes at altitudes ranging from 589 to 629km above the Earth’s surface. The satellites will use Ka-band frequencies, which allow for high-bandwidth satellite communication; the same band is used by the Iridium Next telecommunications satellites and will be used by the James Webb Space Telescope.

Satellite internet remains expensive, but allows for wide access, high data speeds, and provides reasonably low latency for satellites in LEO.

Amazon states that its satellite constellation could bridge the “digital divide” by providing connectivity to rural and other underserved parts of the world, helping “tens of millions of people who lack basic access to broadband Internet.” However, Amazon has requested a waiver on a requirement to serve the entirety of the US, as its proposed satellite constellation would not cover some parts of Alaska.

It has also marketed Kuiper as a means for providing mobile LTE connectivity to underserved areas.

“Amazon seeks to maximise the potential of spectrum and orbital resources available to advanced NGSO broadband constellations, providing high quality broadband service to customers while simultaneously enhancing spectrum efficiency and spectrum sharing with other authorised systems,” the Amazon filing says.

The satellite constellation will be able to use existing infrastructure – such as data centres and fibre – which is used to support Amazon Web Services (AWS).  

No timeline for launch was included in the FCC filing, although Kuiper Systems has said that satellite broadband could be offered soon after the first launch phase, which will involve a batch of over 500 satellites. The satellites are likely to be launched on rockets developed by Blue Origin, which is funded by Amazon CEO Jeff Bezos.

While satellite constellations have attracted concern due to their potential to contribute to space debris, Amazon has stated that the satellites would be set to deorbit themselves in less than 10 years.

An attempt by the Teledesic venture, backed by Microsoft co-founder Bill Gates, to create a constellation of LEO satellites in the Ka band was abandoned in 2003, after mounting costs of more than $9bn. That failure dampened enthusiasm for similar ventures for the next decade, but a handful of companies have recently entered the race to provide broadband internet via satellite constellation. Elon Musk’s SpaceX has been given permission by the FCC to deploy up to 7,000 satellites and has already launched 60, while OneWeb and Facebook have laid out plans to build satellite constellations for the same purpose.

Window film balances indoor temperature by absorbing sunshine for later release

July 8th, 2019 no comment

The molecule has the unique ability to capture energy from the Sun’s rays and release it later as heat.

The technology could both smooth out the indoor temperature swings of buildings equipped with the film and reduce the energy used for air conditioning or heating.

The developers of the film, who work at Chalmers University of Technology, Sweden, say that when their specially designed molecule is struck by the Sun’s rays it captures photons and simultaneously changes form.

When the Sun stops shining on the window film, the molecules release heat for up to eight hours after the Sun has set.
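The practical implication can be sketched with a simple energy balance. The storage figure below is a hypothetical assumption, not a measured value from the Chalmers team: if a square metre of film stores a given amount of solar energy during the day and releases it evenly over the reported eight hours, the average heating power is simply energy divided by time.

```python
# Illustrative energy-balance sketch, not measured data from the Chalmers
# team: average heat output per square metre over the release window is
# stored energy divided by the window's duration in seconds.
def release_power_w(stored_j_per_m2: float, release_hours: float = 8.0) -> float:
    """Average heat output (W per m^2) over the release window."""
    return stored_j_per_m2 / (release_hours * 3600.0)

# Hypothetical example: 720 kJ stored per m^2, released over 8 hours:
print(round(release_power_w(720_000), 1))  # 25.0 W per m^2
```

The point of the sketch is that even a modest stored energy per square metre translates into a steady, gentle heat output spread over the whole evening rather than a burst.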

“The aim is to create a pleasant indoor environment even when the Sun is at its hottest, without consuming any energy or having to shut ourselves behind blinds. Why not make the most of the energy that we get free of charge, instead of trying to fight it?” said chemist Kasper Moth-Poulsen, who is leading the research.

At dawn, when the film has not absorbed any solar energy, it appears yellow or orange, since these are the complementary colours of blue and green, the part of the light spectrum the researchers have chosen to capture from the Sun.

When the molecule captures solar energy and is isomerised, it loses its colour and becomes entirely transparent. As long as the Sun is shining on the film it captures energy, which means that not as much heat penetrates through the film and into the room.

At dusk, when there is less sunlight, heat starts to be released from the film and it gradually returns to its yellow shade and is ready to capture sunlight again the following day.

“For example, airports and office complexes should be able to reduce their energy consumption while also creating a more pleasant climate with our film, since the current heating and cooling systems often do not keep up with rapid temperature fluctuations,” said Moth-Poulsen.

The molecule is part of a concept the research team calls MOST, which stands for Molecular Solar Thermal Storage.

Previously, the team presented an energy system for houses based on the same molecule. In that case – after the solar energy had been captured by the molecule – it could be stored for an extended period, such as from summer to winter, and then used to heat an entire house.

The researchers realised that they could shorten the step to application by optimising the molecule for a window film as well, which would also create better conditions for the slightly more complex energy system for houses.

What the researchers still have to do is increase the concentration of the molecule in the film, while retaining the film’s properties, and lower the cost. Moth-Poulsen believes that both are achievable in the near future.