"Choppers." When Corporal “Radar” O’Reilly spoke that word on the hit television show “M*A*S*H,” the 4077th Mobile Army Surgical Hospital went into go mode.
- Information collection, processing and overload have characterized the past two generations.
- The need for speed, order and security is driving new technology developments.
- What war fighting initiates and funds, commerce adapts and exploits.
Something unseen was coming, and everyone hustled to get prepared. Thanks to Radar, the doctors and nurses serving during the Korean War with the fictional 4077th had a fairly good idea what was clearing the horizon.
Today, we know something’s in the offing, but pinning down what it is can be a challenge. Technology is the stuff of specialists who bring to market products that merely a generation ago were the wild-eyed prophecy of science fiction. Even the things we know exist are so far beyond our ken that we are dependent on a new breed of Gnostics to keep us from looking like Neanderthals.
That hands-on technology includes the magic screen in your office that holds all your work-related information and sometimes refuses to hand it over. It is that box’s Mini-Me version that also serves as a phone and rudely decides that your sales call will end in the middle of your prospect’s sentence. It is also your car, whose dashboard now does things no one seems to know how to stop it from doing.
Spinning in this technological bobbin are advances in computer processing, satellites, radars, broadband, imagery capture, electromagnetism, knowledge of the deep sea, and ventures into and back from outer space. Braided into that thread are the newest threats to your clients’ reputations, income, private information, health, personal safety and employee wellbeing.
Imagine eco-friendly cities of the future with elevated monorails unlike any you’ve ever seen. An electromagnetic girder system might cover entire cities and allow cars to be pulled up, hung by their roofs on the track, and slid to their destination depot. Homes and commercial buildings would be made of solar absorbent materials that would feed the power needs of occupants. Food would be ordered from farms and brought to stores fresh for pickup, eliminating much of the power needs and waste generated by current mass grocers.
While the broad term “technology” encompasses a field that is too wide to conceive, we can competently plan for new achievements, new trends and new responses. Compartmentalizing the world of technology seems offensively pragmatic, but it’s a necessary evil. It may be helpful to break down the universe into a few neighborhoods, vast though they are. Data, exploration, energy, transportation and commodities cover the spectrum.
Making Sense of Big Data
Our ability to collect enormous volumes of data quickly has surpassed our ability to analyze it all. Much of what we could know lies in cyber trash cans or vast holding bins in digital limbo. For it to be used, we need to ask the right questions. But asking good questions requires us to have enough knowledge to know what we don’t know and what we need to know.
The Barcelona Supercomputing Center’s cardiac computational model holds promise for doctors to run personalized cardiac simulations on each patient, so diagnoses and treatment protocols are tailored to the individual instead of relying on generic studies that must report results based on the norms of large groups.
“We have a lot of really great data,” says Dave Changnon, an applied climatologist at Northern Illinois University and a nationally recognized expert on using historical data to analyze the loss costs of snowstorms. “We just need to know how to apply it. Big data is sitting out there waiting for application parameters.”
The key is arranging the data so it makes sense. The Geographic Information System, or GIS, first designed by Esri in 1969 for land-use planning in Maryland, has blossomed into a global application with more analytic uses than can be named. GIS is a computer model that allows the user to layer data in ways that are especially meaningful. The military, for example, uses GIS for targeting, battle damage assessments, evacuation planning—just about everything. Weather forecasters use it to determine elements conducive to catastrophic events. GIS can be used for marketing, emergency response, land development, risk management and underwriting.
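Layering works like a stack of transparencies: each dataset shares the same geographic coordinates, so one layer can be intersected with another. The sketch below is purely illustrative, using hypothetical locations and a simplified rectangular hazard zone, but it shows the core overlay idea of intersecting an exposure layer with a hazard layer.

```python
# Illustrative GIS-style layering with hypothetical data: overlay a
# flood-zone layer on a layer of insured locations to find which
# exposures fall inside the hazard area.

flood_zone = {"min_lat": 29.5, "max_lat": 30.1,    # hypothetical bounding box
              "min_lon": -95.8, "max_lon": -95.1}

insured_locations = [                               # hypothetical point layer
    {"name": "Warehouse A", "lat": 29.8, "lon": -95.4, "tiv": 12_000_000},
    {"name": "Office B",    "lat": 30.4, "lon": -95.3, "tiv": 4_500_000},
    {"name": "Plant C",     "lat": 29.6, "lon": -95.7, "tiv": 20_000_000},
]

def in_zone(loc, zone):
    """Point-in-rectangle test: does this location sit inside the hazard layer?"""
    return (zone["min_lat"] <= loc["lat"] <= zone["max_lat"]
            and zone["min_lon"] <= loc["lon"] <= zone["max_lon"])

at_risk = [loc for loc in insured_locations if in_zone(loc, flood_zone)]
exposed_value = sum(loc["tiv"] for loc in at_risk)

print([loc["name"] for loc in at_risk])   # locations inside the flood zone
print(exposed_value)                      # total insured value at risk
```

Real GIS platforms do the same intersection against arbitrary polygons and dozens of layers at once, but the underlying question is identical: which records in one layer fall inside the shapes of another?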
Big data is used for risk modeling, but its applicability to insurance goes far beyond that. It underpins artificial intelligence efforts—think IBM’s Watson supercomputer—it assists city planners, and it’s part of space exploration, medicine, climatology, the hedge fund and mutual fund industry, and just about every other scientific and marketing endeavor now under way.
“Turning data into knowledge is key,” says Frank Summers, an astrophysicist and science visualization specialist at NASA’s Space Telescope Science Institute in Baltimore. “It’s incredibly easy to collect mountains of data. What we need to do is translate that into a visual for people.”
Summers is an expert at doing just that. In March, he helped set the Guinness World Record when he co-taught the largest astronomy lesson ever conducted, at Austin’s South by Southwest festival. “We process visual information effectively and intuitively,” Summers says. “Using computer code, we can subject those intuitions to mathematics to organize what we see and then can answer questions about patterns.”
That brings us back to Changnon’s key point about parameters. The insurance industry needs to formulate exactly what it wants to know to capitalize on the vast availability of data. For the specialists to do their jobs, they need to hear from the generalists—the underwriters, risk managers and brokers—exactly what the generalists need. The specialists also need proprietary data that is concealed behind the silicon curtain, particularly data on insured location and loss costs.
Scientists, Summers says, “take disparate pieces of information to create new knowledge. We correlate sea surface temperatures with plankton abundance, then, from core samples drilled in the ocean floor, we can tell the temperature of the ocean over thousands of years. Do we want to determine patterns of personal behavior? What data in what combinations should we ask for? Counterterrorism, consumer patterns—it’s all doable.”
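The correlation Summers describes is, at its simplest, a one-formula computation. The sketch below uses hypothetical temperature and plankton readings to compute a Pearson correlation coefficient in plain Python; a value near -1 means abundance falls as temperature rises.

```python
import math

# Hypothetical paired measurements: sea surface temperature (deg C) and
# plankton abundance (cells/mL) at the same sites.
sst      = [14.2, 15.1, 16.0, 17.3, 18.5, 19.2]
plankton = [880,  850,  790,  700,  640,  610]

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of std deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(sst, plankton)
print(round(r, 3))   # close to -1: abundance drops as temperature climbs
```

A coefficient is only a starting point, of course: the analyst still has to decide whether the pairing was the right question to ask in the first place.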
Guy Carpenter, a New Jersey-based reinsurance brokerage, has been using big data and making it useful for years. A platform called i-aXs (pronounced “I access”) allows the firm’s modelers to overlay broad data on a carrier’s more specific data.
“We overlay historical data on flooding, storms, etcetera, then use satellite imagery in the area to identify other compounding risks that exist, such as fuel storage facilities, railroad depots, sinkholes,” says John Tedeschi, managing director and head of GC Analytics America at Guy Carpenter. The analysis allows the insurer to understand its clients’ level of risk.
GIS technology is a “massive component” of i-aXs, according to Tedeschi. “GIS can be used in claims and risk management, underwriting, pricing, aggregate management (are we insuring too many of a certain type of risk?) and marketing (we have a geographic pocket where we’ve got no business),” Tedeschi says.
To access data, Guy Carpenter buys satellite imagery to regularly refresh its information stockpiles. “After an event, we talk to our satellite vendors, who work with the imagery providers, who will try to get us a shot of what we need to see,” Tedeschi says. “In terms of pure computational technology, what you can do with hardware is exponentially greater now than it was six years ago. It’s the same for software.”
Not only is the computational demand increasing, the way the data are accessed and applied is changing. “You not only have to keep up with the speed,” Tedeschi says, “there’s also the change in delivery mechanisms, such as tablets.” Providers of data and analysis have to be as nimble as the market for end-user gizmos. If end users want to see results on their iPhone, data providers respond.
The demand for smart analysis is also broadening, including an appetite for big data analysis on profitability and marketing. Guy Carpenter’s Profit Point+ product responds to that appetite, looking at the return on risks in specific locations. Profit Point+ helps with the allocation of capital and other costs of risk, which can be translated into a visual to help clients understand the profitability associated with insureds. Since many insurance-linked securities and reinsurance transactions are attached to very specific geographic areas, the program provides needed granularity.
“As new capital capacity and regulation come to the marketplace, skilled analytical individuals capable of working with Big Data will be needed to help quantify and manage risk,” Tedeschi says.
That’s where generalists need to collaborate with specialists. “We need computer programmers [to write code], but we need people to ask the right questions so the programmers can write the right programs,” Summers says. “We also must teach critical thinking,” he says. His kids go to a school in the Baltimore area whose motto is “Learn to think.”
Students in a handful of high schools nationwide are learning GIS as part of an elevated science curriculum. They’re taught to master the basics of the system, find free data from worldwide public databases and ask the right questions. These high schools are churning out students with ready-made risk-management skills that are transferable to any discipline, says Mary Schaefer, a Fairfax County, Va., high school teacher who trains students on the program. But who in the insurance industry knows about these students? Who is branding their corporate mark in these young learners’ minds?
It’s not just the questions that are challenging, Summers notes. “We need to appreciate that we are grappling with complex subjects that have answers that are complex,” he says.
Beyond qualified specialists and smart generalists, however, the insurance industry needs machines—and software—that can keep up. Supercomputing is part of the intelligence equation. “Processing of data is no longer the bottleneck,” Summers says. Today’s challenges are more likely to be transmitting information within the computer, ensuring the computer’s memory is fast enough to store information and preventing the energy produced by the central processing unit from melting the machine’s innards.
Developments in mathematical algorithms, hardware interfacing, and component cooling are all under way to increase processing speed and store larger amounts of data in ever-smaller devices.
“The pipedream is quantum computing,” Summers says. “Remember, as you try to do more faster, you also have to get smaller and smaller. Lithography is on a nanometer scale right now. Chip fabrications are getting infinitesimal. What’s the limit on going smaller? The individual atom? That’s where quantum physics comes in. Can you somehow manipulate individual atoms to hold information? Transmit and process information? Right now, we’re nowhere near that. But that’s the dream.”
That said, scientists have successfully established qubits. A qubit, or quantum bit, is the basic unit of quantum information, and unlike an ordinary bit it can exist in a superposition of two states at once. Rather than holding a 1 or a 0, a qubit can hold both simultaneously, and scientists can send a signal to flip the 1-0 positions.
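The flip and the superposition can be mimicked on an ordinary computer. The toy sketch below is a classical simulation, not real quantum hardware: it represents a single qubit as a two-number amplitude vector, where an X gate performs the 1-0 flip and a Hadamard gate puts the qubit into an equal superposition whose measurement probabilities split 50/50.

```python
import math

# Toy single-qubit simulator (illustrative only).
# |0> = [1, 0], |1> = [0, 1]; a gate is a 2x2 matrix applied to the state.
zero = [1.0, 0.0]

def apply(gate, state):
    """Multiply a 2x2 gate matrix into the two-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

X = [[0.0, 1.0],                               # bit-flip gate: swaps 0 and 1
     [1.0, 0.0]]
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],    # Hadamard gate: equal superposition
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

flipped = apply(X, zero)            # the "signal to flip": |0> becomes |1>
superposed = apply(H, zero)         # 1 and 0 at the same time

probabilities = [a * a for a in superposed]    # Born rule: |amplitude|^2
print(flipped)          # [0.0, 1.0]
print(probabilities)    # roughly [0.5, 0.5]
```

The catch, and the reason quantum computing is a dream rather than a product, is that simulating many qubits this way blows up exponentially, while real qubits are notoriously fragile to hold in superposition.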
“In astronomy, we have an analog for that spin—up or down—and can detect spin flip radiation,” Summers says. “We currently can’t do any kind of analysis, though, because we’re talking about needing billions of transistors on a chip.” But that’s where the science is trying to go—so, so small. So, so much, in such a small, small space. And fast.
Into Deep Sea
The Hubble is so 20th century. Satellites are about as exciting as another moonwalk. We have an unmanned vehicle transmitting pictures of Mars. And we demoted Pluto from planet status. So what’s next? How about discovering the secrets of the depths of the earth’s oceans?
About 71% of the earth is covered in ocean, according to the National Oceanic and Atmospheric Administration, and 95% of the underwater world is unexplored. More than one sixth of jobs in the United States are marine-related, and more than a third of its gross national product originates in coastal areas, NOAA says. Worldwide, the deep sea holds promise for the exploitation of further petroleum and gas resources as well as important mineral stores.
The trick is finding out what’s down there and where. Many new commercial and scientific ventures are under way to discover both environmental and commercial interests. And those ventures are no longer the exclusive activity of highly developed nations.
In April, for example, a South Korean company, LS Cable & System, announced that it had successfully deployed an umbilical cable to help guide remotely operated vehicles (ROVs) in the deep sea. The cable is critical for supplying power and communication signals to ROVs, according to the company, so that robotic arms, sensors, cameras, and drive and steering systems retrieve needed data.
The first of its kind to be developed in South Korea, the cable is designed to endure extreme undersea conditions, such as high water pressure and tumultuous currents. It can operate at depths of up to 6,000 meters. Cables with such capability previously had to be imported from Europe and the United States. According to the company, “competition for deep water undersea resources is growing fiercely…as natural resources on land are being depleted.” LS Cable says it also expects a “significant increase in the demand for umbilical cables” to support deep-sea ROVs.
The Catlin Group, the global specialty property-casualty insurer and reinsurer, has embarked on an ambitious endeavor that uses cutting-edge technology to study the sea. The project, known as the Catlin Seaview Survey, is part of an intense mapping of coral reefs, starting with Australia’s Great Barrier Reef and Coral Sea. The survey is designed to establish a baseline of coral reef health around the world so that future measurements can be benchmarked against a historical database to determine the change in the reefs over time.
The project has produced some amazing images, viewable at www.catlinseaviewsurvey.com. This was the second major scientific project undertaken by Catlin, which has grown from $1 billion in value to $5 billion over 10 years. The first was a study of changes in arctic ice.
The Catlin project includes a deep-water survey, complete with camera-equipped robots and the Australian-developed SVII camera, which can be used as deep as 100 meters and is operated entirely by a tablet computer. The survey uses diving robots to explore reefs from 30 to 100 meters below the surface, areas about which little is known. Once the mapping is complete, as many as 50,000 panoramas will be accessible via Google Earth and Google Maps.
The photographs are GPS-coordinated, so the sites can be revisited later for a time-lapse comparison. The advances in mapping and imaging technology have enabled scientists working on the survey to take a serious step forward in cataloguing the world’s reefs, says James Burcke, head of communications at Catlin. “There are questions out there,” Burcke says. “What will environmental changes mean for policymakers, insurers, and humanity? The reefs, which are a great source for pharmaceuticals, could decrease by 90% in the next 50 years. If barrier reefs die, what does it mean for cyclones, erosion or other coastal concerns? These are the questions we seek to answer over time using compiled data.”
The sea is also the new frontier for energy. To accomplish sea-based energy exploitation that goes beyond offshore drilling, companies need to know what’s where. The field of mapping has exploded, enhancing our ability to identify various states of matter. LiDAR immediately comes to mind. The Light Detection and Ranging optical remote sensing technology measures speed, rotation, distance, and chemical composition and concentration. It can study a target that is a clearly defined object, such as a mountain, or a diffuse object, such as a gaseous plume.
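The ranging part of LiDAR rests on a single relationship: the laser pulse travels out and back, so distance is half the round-trip time multiplied by the speed of light. A minimal sketch, with hypothetical echo times:

```python
# LiDAR ranging in one line of physics: a pulse of light travels to the
# target and back, so range = (speed of light x round-trip time) / 2.
# The echo delays below are hypothetical.

C = 299_792_458.0            # speed of light in a vacuum, m/s

def range_from_echo(round_trip_seconds):
    """Convert a pulse's round-trip travel time into a one-way distance."""
    return C * round_trip_seconds / 2.0

for dt in [1.0e-6, 2.0e-6, 6.67e-7]:       # microsecond-scale echoes
    print(round(range_from_echo(dt), 1))    # distance in meters
```

Underwater systems have an extra wrinkle: light travels roughly a third slower in water than in a vacuum, so a subsea instrument must use the in-water speed rather than the constant above to get ranges right.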
LiDAR wasn’t particularly useful for sea studies since it penetrated only a few centimeters into the water, but new developments in the field are moving LiDAR’s capabilities into the ocean. In March, the world’s first underwater LiDAR system tailored to the needs of the subsea oil and gas industry was introduced by CDL, a global engineering company founded in Scotland with a corporate base in Houston and branches in Brazil and Singapore.
According to CDL, the product, InScan, is a three-dimensional, underwater LiDAR system that beams a light (laser pulse) to collect high-resolution point clouds of subsea assets and the surrounding environment. The data it collects can be processed and modeled into GIS or CAD systems. The company envisions broadening InScan’s application to freshwater industries, such as dam and bridge construction and verification, as well as environmental monitoring.
That kind of technology will be critically important for developing deep sea fuel exploration and development, but it will also aid the growth of wind energy turbines, an industry already flourishing in Europe but only in its nascent stages elsewhere.
The physical challenges of offshore wind are numerous. Since the turbines are built and maintained at sea, they carry construction perils similar to oil rigs. “These are enormously tall structures that need to be erected at sea, working off of barges, and anchored to the seabed,” says Patrick Jeremy, founder and president of PowerGen Claims, a loss adjuster specializing in electrical power generation. “These turbines are growing in their ability to generate more power. The blades are getting broader and more efficient, and the gearboxes, their generators and their support equipment are all improving. This is where the development in the industry is.”
Getting those gearboxes to run more efficiently is key to making offshore wind energy financially attractive in the United States and other countries that are currently petroleum-dependent. “GE, Siemens, Vestas, the big boys are really developing these units to get more power out of the wind,” Jeremy says. “But we really need undersea cables that can bring the power ashore from out there in the sea. Those don’t exist yet. They need to be developed and laid, and there needs to be a broad plan in place for how to store the power once it’s transmitted from the turbine. This kind of infrastructure requires splicing, junction boxes, connections—all with a few million tons of seawater crushing in. That’s very hard to engineer.”
There is some work going on, though. Jeremy predicts there will be a couple of offshore wind farms within a decade. Massachusetts looks promising to him, though a farm there would be limited to the bay area; it has been approved but suffers from NIMBY (Not In My Back Yard) opposition, Jeremy says. The technology is considered unsightly, and farms might disrupt fisheries and the recreational industry that thrives along coastal shorelines. “Virginia will probably be first,” he says.
A repair industry will be needed to sustain any further development of wind farms. With onshore wind, if a turbine breaks, workers take the rotor off and set it on the ground. If a breakdown occurs at sea, they would need a barge and a crane and the cooperation of the weather and the seas. “And we haven’t even talked about hurricanes,” says Jeremy. “Additionally, if you lose your gathering system (those are the lines that bring the power ashore), the whole system goes down. We’ll need the technology to make deep sea repairs to cable.”
The European Wind Energy Association’s website has a wealth of information on how offshore wind energy farms work, the vessels that are needed and the supporting industry. The association expects a 33% increase in the European Union’s share of electricity powered by wind by 2030 and a 50% increase by 2050. For that to happen, and for wind to become a viable alternative energy elsewhere, improvements are needed in system operation and design, transmission infrastructure, offshore grids, a long-distance overlay grid, and cross-border electricity market integration for power sharing and dissemination. And then there are regulatory and financial hurdles, including insurance.
A recent report by GBI Research indicates the global energy storage market will grow by about 55% from 2011 to 2016. Asia is assessed as the leading regional market for energy storage, followed by the U.S. and Europe. Capacitor-based energy storage is one of the most promising sectors, growing at more than 10% through 2016. The market encompasses advanced standard lead-acid batteries, fuel cells, pumped storage, superconducting flywheels, ultracapacitors and more, all of which promote balanced and efficient electrical grid systems.
Solar Power at Night
There is one project attempting to improve thermal energy storage capacity during non-peak production hours. Under the Energy Department’s Advanced Research Projects Agency-Energy, a company called Navitasmax is trying to use the properties of simple and complex fluids to increase their ability to store heat. If the company is successful, solar and nuclear facilities will be able to release electricity at times of day when they currently can’t produce. That will boost the cost-efficiency of utility-scale solar power plants, which currently run at about 25% of capacity because they can’t generate power at night. Nuclear power plants produce a constant power output. Improved energy storage would allow variable production, improving output during peak demand hours and scaling back during low-demand periods.
“In 20 years, your doctor will have a Titan the size of an iPad 4 in front of you at his office,” says Mariano Vazquez, co-head of an international research team at BSC. “Right now, we are constrained by physics—the speed of interfacing computers and the information within them—but that is an area of concentration.”
Consumer Demand Is Driving Technology
Developments in imagery, communications, transportation, and the speed and storage of data have historically been driven in the U.S. by military needs and funded by the Department of Defense or related national security budgets. Now, however, there’s an increase in commercially motivated technology development.
High-speed air travel, bullet trains, even luxury space travel are global growth industries. Consumers are propelling the development of wireless communications and miniatures of everything electronic, forcing chip makers to find ways to make ever smaller, ever more heat-tolerant conductors. Microchips and radio frequency ID chips are already used to identify horses and dogs and could be applied to automobiles and other property as well. Parents might even choose to use them to help protect their children.
Wearable computers, which just a few years ago were considered Star Trekian, are now on the market. Internet-connected eyeglasses embedded with GPS or other computerized information became commercially available for early testers in April from Google, which is seed-funding companies that develop applications for Google Glass. The company aims to make a full consumer version available by the end of the year, by which time it also expects to roll out its Knowledge Graph—a database of the 500 million most searched people, places and things in Google-land. The program autoloads contextual data that jibes with your search phrase to generate many relevant correlations. That includes written and photodigital information as well as voice data in as many languages as Google can access. The company’s “signals of salience” help it ever more accurately link data to searches. The goal? As Google co-founder Larry Page said in a 2004 interview, “Search will be included in people’s brains.”
“Eventually,” Page said at the time, “you’ll have the implant, where if you think about a fact, it will just tell you the answer.”
If you aren’t satisfied with simply computerizing your view, you can also wear digital clothing. Some of it can be wired to display a message. Some is static but comes from a 3D printer instead of a department store rack. In March, the actress Dita Von Teese modeled the first major release of printed couture: a black gown made from powdered nylon, produced on a 3D printer and assembled from 3,000 small joints that make it figure-hugging. Such 3D printing ultimately could produce just about anything, including body parts, with the right material in the printer. In fact, the first handgun from a 3D printer has already been made, and its designer was licensed as a gun manufacturer in March. It shoots bullets and is not detectable by airport metal detectors, which has government officials concerned.
All this technology is not only expanding demand for high-tech commercial products, it’s expanding the need for better logistics facilities in producing countries, many of whose ports, warehouses and energy and transportation infrastructure have not kept up with manufacturing, storage and shipping needs. But watch as China and other major Asian producers make serious advances over the short term to solve that deficiency.
What it all means for property, liability and specialty coverages is part of what we’re in the process of discovering. The big bang may have been the first explosion, but the universe is riding a new powder keg, and this one’s force grows exponentially. For those with attuned radars, that exponential change can compound into organic growth.