
Essay

Questioning Springtime Everywhere | countermapping the ‘Pretty Earth’

Stu Sontier / May 2024 

Introduction

My post-photographic work over the last three years has looked at anomalies in different aspects of Google Earth as I attempt to follow the developments in the algorithms that drive it. This work also positions Google Earth within the company's wider corporate strategy, especially concerning its ethical implications.

This discussion will cover the photo-related aerial and satellite imagery of Google Earth in relation to the truth-value assigned to it through its proximity to photographs and cartography. It will look at some of the algorithmic processes that call into question a complete reliance on the implied veracity of what is seen. By also looking at the colonial tendencies of earlier cartography, and at the well-meaning but somewhat blinkered cultural outlook of a US-based, globally active commercial entity with defence and military contracts, it will point to some potential future issues.

[aerial / early photography]

As with most technical innovations, it's hard to point to a definitive first aerial photograph, but both Nadar and James Wallace Black were photographing from hot air balloons in the late 1850s. Birds and kites were also deployed to produce early top-down views of the earth. Sub-orbital images date to 1946 and the first satellite photographs were made in 1959.

This fascination with looking down – a voyeuristic tendency – didn’t start with photography, but it was the first time that accurate views could be captured and shared. Early photographers exploited the naïve joy of capturing elements of reality from above, but as is the way, not far behind were practical applications for problems – military intelligence and colonial ambitions being a significant pair.

[Photography and cartography as evidence]

To generalise heavily: since its beginnings, there have been three significant crises over the veracity of photography.

Alice and the Fairies, July 1917 © Elsie Wright & Frances Griffiths / Science Museum Group collection

The first came around the 1970s, contemporaneous with the rise of postmodernism, when the singular point of view of the still image was called into question and photography came to be used in much more inward-looking, as well as conceptual, ways. The photograph did not tell a story without an accompanying context.

The second came in the 1990s, when the truth-telling nature of photography was again said to have been decimated by digital cameras and the ease of image editing.

Both of these crises continue to be interrogated and inform the work of many photographers.

In 2024, we are seeing a third crisis acting on the credibility of the image, due to the ease of fabrication by a number of algorithmic processes.

The question of truth in photography has in fact been contested since its birth, seen perhaps most notably in the rejection of photographs as evidence in some early courts, along with the deceptive images of fairies1 and spirits that were common at the time and fooled notables such as Sir Arthur Conan Doyle, who also believed photographs that 'proved' spiritualism.

In many courts, photographs are admitted as evidence only when there is testimony from either the camera operator or a witness. Without such, courts could question the accuracy of the process. 

The ‘neutral’ stance of the documentary photograph has been repeatedly challenged, perhaps no more aggressively than by photographer Martha Rosler. In her 1981 essay “In, around and afterthoughts (on documentary photography)” she attacks the underpinnings of much (American) socially-interested work, stating “Documentary, as we know it, carries (old) information about a group of powerless people to another group addressed as socially powerful.”2

Despite all this questioning, the seductive nature of the photographic image carries over to aerial imaging. In many ways the technical nature of its production, coupled with a lack of understanding of the detail of the processes (which results in a compelling black box), lends extra truth value to aerial and satellite images. In reality, such images should be read with caution – with awareness that buildings and other tall shapes will be foreshortened at certain heights, and that the choice of map projection introduces significant distortions. In Google Earth, transitions between satellite and aerial imagery introduce unknown alterations, and in 3D mode the passage from aerial to ground view goes through quite unusual transitions. Currently the transition from ground-level view to Street View and to user-uploaded imagery is very obvious, but it’s likely that Google may in future make even these transitions seamless.

In the (pre-internet age) book “How to Lie with Maps”3, Mark Monmonier discusses the numerous ways that cartographers have purposely and unintentionally bent the truth with maps. He notes that “map users often fail to appreciate the map’s power as a tool of deliberate falsification or subtle propaganda”.

He makes it clear that maps must by definition leave some detail out and notes that “a single map is but one of an infinitely large number of maps that might be produced … from the same data”.

Geographer Jeremy Crampton argues, in the book “The Hyperlinked Society: Questioning Connections in the Digital Age” (in a chapter titled “Will Peasants Map”4) that “for most of its history, mapping has been the practice of powerful elites” although he optimistically sees examples of populist ‘countermapping’ where cartographic control is put back in the hands of the public.

[Satellite imagery and evidence]

The system of admitting evidence into court has already been mentioned in relation to photography. When it comes to satellite imagery, the courts again take a cautious approach5 due to the lack of precedent as well as full legal understanding of the systems at work. Privacy issues also weigh on this admissibility6.

The issue of forgery is one that bedevils photography, and courts see digital imagery as more fraught still because data can potentially be altered without detection. While this has validity, there are also digital forensic tools that can detect many forms of image and data editing. In fact, in a kind of cat-and-mouse game, forgery in images, whether physical or digital, soon results in techniques for detecting forgeries. This same game is now playing out with faked audio and video such as deepfakes, and in detecting AI-created texts.

A text considered a standard on court evidence in this respect – “Evidence from Earth Observation Satellites”7 8 – states that “it is imperative to supervise the process of obtaining the image from the moment it is collected as primary data right up to the time it is used in court”. The number of these processes, including algorithmically complicated image processing, makes this somewhat impractical, and so courts often defer to the expertise of companies that have practical input into at least part of the processing.

Much of the concern around false imagery relates to an ‘outside’ ability to alter data. To date though, there seems to be little awareness or concern about data altered by or during the actual capture and processing. One source does spell this out – noting that governments have often forced the alteration of aerial and satellite imagery. Examples include the downgrading of resolution for publicly available imagery and the obscuring of detail for military or other reasons, but there are inherent processing aspects which should also be interrogated. 

In artistic work from 2011, Mishka Henner9 showed how Google Earth clumsily obscured military locations in the Netherlands at government request. These scenes can still be found in 2005 timeline data, and can be seen to have been removed by 2010. However, some literature implies that location obscuring can be done much more subtly. The realistic but algorithmically generated fictional maps on thiscitydoesnotexist.com10 make this abundantly clear.

Yandex Maps still blurs military sites in Turkey and Israel, and it appears that the French Government has successfully convinced Google to obscure prison sites since 2018.11


In his 3rd edition, Monmonier refers to images presented to the UN by the US government. In 2003, US Secretary of State Colin Powell relied on annotated satellite images as part of a package of evidence to ‘prove’ Saddam Hussein had weapons of mass destruction (WMDs). This evidence presaged the US invasion of Iraq in 2003 – a war abetted by the UK, Poland and Australia that continues to have repercussions for Middle East stability.

Satellite image expert Bhupendra Jasani said at the time, “When I look, I can’t be sure what I’m seeing,” noting that even the locations weren’t clear.12

Since WMDs were never subsequently found, it’s clear that (while the images themselves may have been accurate) the annotations by US intelligence agencies were fabricated.13

In writing about the images of torture in Abu Ghraib by the US military that were revealed during this invasion, Phillip Gourevitch said that:  “photographs cannot tell stories. They can only provide evidence of stories, and evidence is mute; it demands investigation and interpretation.”14

Direct faking of satellite images is not unprecedented either. China has reportedly been implicated in introducing bridges and roads into satellite images using GAN techniques15.

In a less subtle approach, Russia bolstered its false narrative around the downing of Malaysia Airlines flight MH17 over Ukraine in 2014 by volunteering two doctored satellite images. However, the open-source investigative collective Bellingcat has shown16 how to contradict Russia’s MH17 evidence quite simply using just Google Earth. Initial satellite images put out by the Russian MoD and dated July were shown more likely to have been made in May and then doctored. Bellingcat later crowdfunded to buy DigitalGlobe satellite images that confirmed their initial results.

[Springtime everywhere – introducing Google Earth]

Gopal Shah, product manager at Google Earth and the friendly face on video interviews and Ted Talks, commented that Google wants “to create a mirror earth”17. Elsewhere, Shah refers to the zoomed out mosaic mega-image of Google Earth as “Pretty Earth – an image of the planet that’s cloud-free and springtime everywhere.”18

Google describes how its world map is made as one huge mosaic. Users interact with this through a browsing interface that feeds them snapshots they can pan, tilt, fly across and zoom in on.

The term ‘snapshot’ is appropriate because its history is one of transparency – vernacular imagery with no agenda, no ulterior motive; a benign, often boring everyday grab of the real, mundane world. But while we often think of satellite images as photographs, it’s worth remembering that technically they exceed this, using lensing and sensors that can capture much wider portions of the electromagnetic spectrum as well as being far more selective about the frequencies detected. These segmented data streams allow more algorithmic processing to be applied to the visual and other data. For instance, infrared bands allow cirrus cloud to be identified and removed. As well, this data is geo-located and multi-temporal.

The multiple-perspective view that a satellite gives – where parts of images are built up by a camera that moves, rather than being stationary – changes how the image is read. It’s reminiscent of the way Andreas Gursky19 made some of his panoramic images, by stitching together views from a camera moved to face directly onto a wide building rather than the single-point view of most panoramas. David Hockney’s multiple-Polaroid pictures20 have a similar relation to perspective. “So images appear to be transparent and offer unmediated viewing; the position of the camera seems to be invisible” – so write Martin Dodge and Chris Perkins in the introduction to “The ‘view from nowhere’?”21

Despite these process-based differences, satellite images, as presented to a user, are most often related directly to the photographic. Dr Robert Tovey, in a paper “God’s Eye View – The Satellite Photography of Google”22, argues that the photographic nature of Google Earth renders it unquestionable, its background invisible and gives it a “potency beyond traditional cartographic representations”.

Google Earth image credibility thus benefits from two different sources – the apparent credibility of cartography, and the still-existing belief in the reality of the aerially-based photograph. My own thesis is that the use of mapping products has normalised these views over the last 17 years and become so embedded in a general cultural sense that we tend to believe what we see on mapping platforms even when cues are there to make it clear the data is faulty.

[Satellite imagery and algorithms]

For nearly three years I’ve investigated and manipulated visual samples that implicate Google Earth imagery in a semi-fictionalised human-centric modification of data that has practical function but may lull the average user into false notions about what they are viewing.

In the early days of Google Earth, interest came from artists like Henner, Clement Valla23 and Jenny Odell24, along with other ‘post-photographers’ who used Street View to look at more human-based views of roadways and other accessible areas.

Artistic interest appears to have dwindled since those early investigations. Behind the scenes though, it’s clear that algorithms are being tuned, and no doubt new ones introduced to deal with efficiency in data collation, processing and delivery amongst other things.

Google only talks in broad terms about the ways algorithms and machine learning work in its processes, but some aspects can be understood from technical papers and patents, as well as by considering the problems that need to be solved. The quantity and quality of image data needing processing has ballooned and keeps rising. Broadband take-up means more users want more data, faster. Improvements in 3D rendering combine with the growing number of locations that have been mapped and modelled.

Apple Maps expert Justin O’Beirne25 noted one development, the machine-learned ability to create building footprints, where, from satellite and aerial views, “computer vision techniques extract detailed 3D models”. In Google Maps this appeared to start back in 2012 in just a few areas, but by 2017 it had expanded hugely. In visual examples he showed how the level of detail in both the footprints of buildings and the roof and exterior detailing raced ahead of Apple’s product. By 2024 this has extended well beyond the US. In my own region, Auckland, Hamilton, Wellington, Queenstown and Christchurch have detailed 3D-rendered buildings showing air-conditioning, chimneys and other external structures, although Dunedin still has a few clunky SketchUp models of buildings. O’Beirne shows how the mapped footprints coincide with the mesh-based 3D models that have been gradually added to Google Earth. As well, the automated extraction and labelling of objects allows Points of Interest to appear on maps.

Google has explained a little about the process behind finding building outlines: “Buildings are landmarks and a key part of how someone knows where they are when looking at a map… The Google data operations team worked to trace common building outlines manually, and then used this information to teach the machine learning algorithms which images correspond with building edges and shapes. This technique proved effective, enabling Google to map as many buildings in one year as they mapped in the previous ten years.”26
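
To make the mechanics of that description a little more concrete, here is a minimal, hypothetical sketch of the final step Google alludes to – turning a per-pixel ‘building / not building’ prediction into discrete footprints. The segmentation model, the minimum-size threshold and the rectangular footprints are assumptions for illustration, not Google’s actual pipeline.

```python
# Hypothetical post-processing of a building-segmentation mask into footprints.
import numpy as np
from scipy import ndimage

def footprints_from_mask(building_mask: np.ndarray, min_pixels: int = 20):
    """building_mask: 2D boolean array from a segmentation model (assumed)."""
    # Group touching "building" pixels into separately labelled regions.
    labelled, n_regions = ndimage.label(building_mask)
    footprints = []
    for label_id, region_slice in enumerate(ndimage.find_objects(labelled), start=1):
        # Discard speckle too small to be a plausible building.
        if np.count_nonzero(labelled[region_slice] == label_id) < min_pixels:
            continue
        rows, cols = region_slice
        # A crude rectangular footprint in pixel coordinates; a production
        # pipeline would trace and regularise the actual polygon instead.
        footprints.append(((rows.start, cols.start), (rows.stop, cols.stop)))
    return footprints

# Toy example: two blobs standing in for predicted buildings.
mask = np.zeros((100, 100), dtype=bool)
mask[10:30, 10:40] = True
mask[60:90, 50:80] = True
print(footprints_from_mask(mask))
```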

Google also notes how machine-learning algorithms can detect new buildings and update the map without the need to remap. O’Beirne again shows this happening as he describes how automated the process must be – buildings can appear even before the Street View camera arrives.

Delivery of high-quality data has seen innovations that Google patented as the Universal Texture, which Clement Valla referenced in his work. It relies partly on creating illusions of depth with mip-mapping, in which a pyramid of the same imagery at progressively lower resolutions is stacked. By clipping these stacked image sets in smart ways, algorithms reduce the amount of data that needs to be delivered.27
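
As an illustration of the mip-mapping idea (and only that – Google’s patented Universal Texture is far more elaborate), a sketch of the pyramid-and-level-selection logic might look like this:

```python
# An illustrative mip pyramid: keep a stack of progressively half-resolution
# copies and serve whichever level roughly matches the resolution on screen.
import numpy as np

def build_mip_pyramid(image: np.ndarray, min_size: int = 1):
    """image: 2D array (single band for simplicity). Returns [full, 1/2, 1/4, ...]."""
    levels = [image]
    while min(levels[-1].shape) > min_size * 2:
        prev = levels[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        # 2x2 box-filter downsample: average each block of four pixels.
        half = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(half)
    return levels

def pick_level(levels, source_pixels_per_screen_pixel: float) -> np.ndarray:
    """Choose the coarsest level that still keeps roughly one source pixel per screen pixel."""
    level = int(np.clip(np.log2(max(source_pixels_per_screen_pixel, 1.0)),
                        0, len(levels) - 1))
    return levels[level]

pyramid = build_mip_pyramid(np.random.rand(1024, 1024))
tile = pick_level(pyramid, source_pixels_per_screen_pixel=4.0)  # zoomed out 4x
print(len(pyramid), tile.shape)
```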

The increase in consumer computing power allows machines to crunch the image data with better video capabilities, and to introduce more sophisticated 3D imagery. In fact Google refers to this 3D view as “mind-blowingly realistic insane”, which, as we’ll see, can at times easily be shown to be hyperbole.

Although O’Beirne sees most of these developments as just “cool”, he does note that the highlighting of Areas of Interest implies how confident Google must be in its accuracy, and that the joining of Street View with building detail implies that Google may hold interior views of many buildings too. Coupling these insights with what we’ll discuss later about Google’s military contracts should suggest some less-than-cool implications.

[3D]

a small collection of trees (a bottle of meths in the mangroves) – stu sontier

Some of my own work involves investigations of the 3D modelling option in Google Earth. As one zooms in from the initial satellite view, there is a seamless display that moves through various sources such as aerial photography right down to Street View. But if 3D is selected, Street View is augmented by 3D models in many cities and in the USA even in rural environments. These models attempt to accurately describe the urban spaces, including cars and trees.
In 2013 Google announced the modelling of 20 new species of tree that would appear in its urban 3D places, on top of the 50 or so that were announced in 2010. 

The 3D-rendered trees in Google Earth can also throw up some interesting signs of problems with both object recognition and texture mapping. The following pair of images shows the same object, which has obviously been assigned the label of some kind of tree in the 3D view:

Given the seeming lack of improvement in tree modelling more recently, it may be that Google sees no interest in modelling trees or buildings more accurately at the moment, or that the cost of reworking the data is too high. But there is every possibility that satellite and other data could have enough resolving power to show much more detail and allow personal identification, meaning that Google Earth could start having privacy problems similar to those that surfaced with Street View and resulted in the obfuscation of identifying areas of images. Google already does some blurring in Google Earth and removes some vehicles for clarity.

Google’s 3D modelling has extended to many places, although generally it is confined to urban and commercial areas. Initially built with SketchUp, many scenes are now created by photogrammetry applied to aerial images taken from multiple positions. Photogrammetry is again a computationally heavy series of algorithms that creates 3D meshes by analysing multiple images, calculating distances to objects and applying feature extraction. A process similar to triangulation locates the objects in space, although there are many possibilities for error, so error detection, correction and clean-up processes must be applied. The resulting surface meshes can then be overlaid with textures also taken from the imagery.
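
The triangulation step at the core of photogrammetry can be sketched quite compactly. The following is a toy illustration using a standard linear (DLT) solve, with camera matrices invented for the example; production pipelines run this over millions of matched features and then bundle-adjust and mesh the result.

```python
# Recover a 3D point from the same feature seen in two images with known
# camera projection matrices (direct linear transform / DLT triangulation).
import numpy as np

def triangulate(P1, P2, xy1, xy2):
    """P1, P2: 3x4 camera projection matrices; xy1, xy2: matching pixel coordinates."""
    x1, y1 = xy1
    x2, y2 = xy2
    # Each view contributes two linear constraints on the homogeneous 3D point X.
    A = np.vstack([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    # The least-squares solution of A.X = 0 is the right singular vector
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Toy example: two cameras, the second shifted sideways, both looking down the z-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0, 1.0])                 # ground-truth 3D point
xy1 = (P1 @ point)[:2] / (P1 @ point)[2]
xy2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, xy1, xy2))                   # recovers roughly [0.5, 0.2, 4.0]
```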

Although the computing power going into this 3D modelling is immense and is impressive at some zoom levels, the 3D view can show Google Earth at its weakest (but most interesting from a glitch point of view). Presumably the modelling at close range will get better, but since building structures don’t currently block a user from entry, the internal meshes can easily be seen. In fact, whole landscape meshes can be explored. These are the holes that Google still leaves for exploration.

when we took the forest apart (for our pleasure) manly #1 – stu sontier

Google is pushing the concept of the seamless, realistic view of the world that it has created and keeps refining, at the same time that the imagery it outputs has had more and more intervention. It’s not clear, for instance, when you zoom in, that you are passing through layered imagery that varies in time-frame and has had pixel-level changes with colour normalisation, data replacement, combinations of 2D and 3D imagery and meshes with texture maps, as well as, at times, user-generated material.


GIS specialists have noted that caution should be used when working with satellite images. My argument is that this seems unlikely to happen in many cases. In fact, visual literacy with respect to satellite imagery is, if anything, less sophisticated than it is for photographs. Culturally, we’ve gained a natural ability to overlook the clues that we’re looking at heavily mediated imagery. We now have the skill of seeing through the glitch, discarding the stitching errors and the loading patterns in order to see the reality that we expect. We don’t know which parts of the image have been modified, and once images are composited at the pixel level it may not even make sense to think of them in that way.

These unseen technological workings have the ability to rearrange and change our relationship with digital visual information. Philosopher Vilém Flusser, in ‘Towards a Philosophy of Photography’28, refers to the camera as a black box that the operator uses within limits defined and prescribed by the apparatus. The functionaries (users) are generally unaware of the mechanics of production and risk being controlled by, rather than controlling, the outputs. Google Earth is similarly a black box, where users risk being in the dark over decisions made.

[Algorithms and hubris]

Although Google’s workforce is multicultural, the dominant ideologies driving the company derive from a much more limited area. The Silicon Valley tech-bro culture arguably displays a limited view of the world while it gains power to influence governments and depoliticises the world’s problems by offering solutions based purely on technology.

In response to difficulties with labelling displayed areas of Crimea, Palestine and the West Bank, Geospatial engineer and Google ‘tech evangelist’ Ed Parsons said in 2014 “I guess, naively perhaps, we hoped we could have one global map of the world that everyone used, but politics is complicated.”29

With the almost magical abilities of algorithms and a history of (sometimes partial) technologies applied to complex problems, it’s easy to get lost in hubris. A group identifying solar powered houses using machine learning techniques on Google Earth congratulated themselves on achieving results, saying “Even for machines, practice makes perfect!”30

Esri, a GIS company based in California, touts the powers of its ArcGIS products in this way: “Using the Pixel Editor, you possess super powers to replace clouds and shadows in your imagery with useful data.”31

As humorous as some of these examples are, they point to the commercial need to be forever optimistic about technologies, if only because your success and sales depend on it.

Google does make it clear that they know some limitations. In 2010 Nicaragua started dredging on a border with Costa Rica only to find that Google Maps had mistakenly shown that the land belonged to the former country, causing a political dispute. A Google statement referred to the high quality of its maps but cautioned “by no means should they be used as a reference to decide military actions between two countries”.32

Another example of hidden cultural myopia is a Google Earth feel-good feature called This Is Home. Gopal Shah tells us in a Ted Talk33 that “we get invited into the homes of people from around the world”. As a white Western user, this power to step into Samali’s house in Lombok, or the Pahou Marae with Albert Stewart, and see how their lives are lived is awesome. But what is this ‘from around the world’? What if I want to step into the house of Prabowo Subianto, incoming Indonesian president, or that of Winston Peters, current foreign minister of Aotearoa, or even have a look at how Gopal Shah sets up his living environment? It’s interesting how odd such requests sound, but they indicate hidden limits on which cultures we can voyeuristically visit. There are bounds to who can be in view – the wealthy, and especially the European and white, are conspicuously absent.

Critiques of this style of presentation echo those of many older forms of social documentary photography. For instance, in Martha Rosler’s essay2 she refers harshly to work “in which members of the ascendant classes are implored to have pity on and to rescue members of the oppressed”, which, she says, belongs in the past.

 

[Cloud removal and nodata areas]

obscured by cloud – outtake – stu sontier

Google first referred to their cloud-free mosaic publicly in 2013, with Matt Hancher’s blog announcement that ‘This stunning new imagery of the earth from space virtually eliminates clouds’. Hancher gives a partial description – “Mining data from a large number of Landsat images of each area allowed us to reconstruct cloud-free imagery even in tropical regions that are always at least partly cloudy.” 34 35

Hancher describes the process as similar to the way their time-lapses were created: “We wrestled with how best to visualize areas with missing or cloud-obscured images from each year. In the end, after much experimentation, we chose to simply interpolate between valid image years. Other techniques, such as greying out invalid data, created distractingly large artifacts. However, the downside with the approach we have taken is that it can be difficult to tell which data is original and which is interpolated. We are exploring the possibility of including a view that allows drilling down into the non-interpolated, original mosaics.” 

The highlighting is my own, but Google thus alludes to problems that arise when doing this interpolation. As far as I know, there is still no public solution that makes interpolation clear to the average user.

Google has put a lot of time and code-based energy into cloud removal to build the clear-viewed mosaic that is ‘Pretty Earth’, of which it is justifiably proud. Numerous technical papers can be found describing various algorithmic techniques, along with code that implements them in the Google Earth Engine (GEE) tool36 – an extension of Google Earth that gives access to the raw datasets as well as a coding environment. Sentinel-2 imagery carries cloud mask information – the QA60 band – that indicates whether opaque and cirrus clouds are present, and built-in functions in GEE can, amongst many other things, detect and delete clouds and replace them with new image data in just a few lines of code.
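
For a sense of how few lines are involved, here is a hedged sketch using the Earth Engine Python API: the QA60 bit positions and the Sentinel-2 dataset name follow the public catalogue, but the area, dates and general approach are illustrative rather than a description of Google’s own production process.

```python
# QA60-based cloud masking and a simple 'cloud-free' median composite in GEE.
import ee
ee.Initialize()

def mask_s2_clouds(image):
    qa = image.select('QA60')
    # In the QA60 bitmask, bit 10 flags opaque clouds and bit 11 flags cirrus.
    opaque_free = qa.bitwiseAnd(1 << 10).eq(0)
    cirrus_free = qa.bitwiseAnd(1 << 11).eq(0)
    return image.updateMask(opaque_free.And(cirrus_free))

aoi = ee.Geometry.Point(174.76, -36.85)  # Auckland, purely as an example
collection = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
              .filterBounds(aoi)
              .filterDate('2023-01-01', '2023-12-31')
              .map(mask_s2_clouds))

# Collapsing the masked stack with a per-pixel median yields a composite
# very much in the spirit of 'Pretty Earth'.
composite = collection.median()
```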

Sentinel Hub, another engine for processing satellite data, notes that QA60 is just a binary classifier, and so has worked to build what it terms the s2cloudless dataset37. “The s2cloudless image provides a cloud presence probability between 0 and 100 percent that you can use to customize the aggressiveness of your cloud masking procedure.”
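
Building on the previous sketch, the probability-based approach reduces, conceptually, to a threshold choice (the 40% figure below is an arbitrary example, not Sentinel Hub’s recommendation):

```python
# Masking with a per-pixel cloud-probability band (as provided by s2cloudless,
# band name 'probability', values 0-100), assuming the ee context above.
CLOUD_PROB_MAX = 40  # percent; lower = more aggressive masking

def mask_with_cloud_probability(image, prob_image):
    # Keep only pixels whose estimated cloud probability is below the threshold.
    is_clear = prob_image.select('probability').lt(CLOUD_PROB_MAX)
    return image.updateMask(is_clear)
```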

Cloud removal and data replacement are not simple techniques where one algorithm can always provide good results. Different topographies respond differently to types of cloud detection. Algorithmic techniques are theorised by many GIS groups with different needs. One paper, which hints towards uses beyond just the viewing of pretty earths, is by researchers from the Air Force Institute of Technology and notes that “The classifier is best at enabling analysis and is less suited towards direct actionable intelligence. It would not be appropriate to use the cloud classifier for a go/no-go criteria on a military operation.”38

Cloud removal acts at pixel level, and must include the removal of cloud shadows as well. It results in holes – ‘no-data’ areas – in the image. Google has chosen, as Hancher states above, to fill the holes with data from previous clear images, using algorithms that blend at pixel level. As a general solution this has some merit, but for some uses it should be made clear what the make-up of these images is.
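
The hole-filling itself can be illustrated with a deliberately naive sketch – not Google’s blending algorithm, just the general principle of falling back to an earlier clear observation wherever the current image has no data:

```python
# Fill no-data pixels (left behind by cloud/shadow removal) from an older,
# assumed gap-free composite of the same area.
import numpy as np

def fill_nodata(current: np.ndarray, previous_clear: np.ndarray) -> np.ndarray:
    """current: image with NaN where clouds/shadows were removed.
    previous_clear: earlier composite assumed to be gap-free."""
    return np.where(np.isnan(current), previous_clear, current)

current = np.array([[0.31, np.nan], [0.28, 0.30]])   # today, one cloudy pixel
previous = np.array([[0.29, 0.27], [0.26, 0.31]])    # last clear composite
print(fill_nodata(current, previous))
# The filled pixel is indistinguishable from the rest unless its provenance is
# tracked separately, which is exactly the transparency problem at issue.
```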

Once one is familiar with some of the interventions on the imagery, defects become more noticeable as do the reasons for them.

[GOOGLE: don’t be evil]

While Google has at times aggressively presented itself as a force for good, it remains at base a corporate entity with commercial aims as well as a largely Western-focussed or motivated (permanent) workforce. Google risks falling into the role of a naïve tech-bro trying to solve the world’s problems while imposing old cultural-imperialist solutions through ignorance.

Google is less championed as a force for good these days, especially after the de-emphasising of ‘evil’ in its motto, but it is still often seen as a more benign company than many. 

The fact that it provides its mapping products for free to most is laudable, but it should be noted that such provisions can disappear or change their access terms at any time, as has happened with many of its products. While this is unlikely with products as widely used as the Maps apps, it is salutary to see the number of products and services that Google has retired. The site killedbygoogle.com39 is informative on this. Google Earth is getting close to its 20-year anniversary, so the assumption can be made that Google still sees commercial worth in an app that it gives away.

Google isn’t likely to become evil by intent, but again the embedded nature of its corporate structure can lead to domination in areas where it is not competent. Lisa Parks critiqued its Crisis in Darfur40 project in this way.

In 2007, Google partnered with the United States Holocaust Memorial Museum to highlight, via the Google Earth platform, the genocidal nature of ethnic cleansing in Sudan. The intent was good – to focus on the way the world ignored one genocide and to attempt to avoid this in future. Parks notes that the project was educational but used satellite imagery more as a backdrop to the emotionally-charged photographs rather than reflecting on practical uses that can be made of satellite imagery in world events. She contrasts this with a report on Darfur by Amnesty International that emphasises much more informative uses of satellite imagery, with attempts to use it to analyse and predict future attack locations. Parks noted that press coverage of the Google project “tended to reduce the political to the visual and encourage a ‘seeing is believing’ logic.”
Google’s involvement is portrayed as a form of disaster capitalism, where it is more about associating the brand with worthy projects.

 [Google as colonist, altruist]

It is often claimed that maps have an aura of neutrality, but historically much mapping effort was a result of colonial and territorial needs. Cartographic historian Brian Harley notes “government maps have for centuries been ideological statements rather than fully objective, value free scientific representations…”41

The extent to which Google panders to governments is unclear, but the fluctuating ‘ownership’ of Crimea according to one’s point of reference when viewing Google Maps is salutary. Google says it follows local laws in such matters (along with many tech companies facing international issues). The Guardian reported how, in 2014, Crimea appeared as part of Russia when viewed from Russia, while viewers in the US saw a dotted border.42

Doug Specht, senior lecturer at the School of Media and Communications at the University of Westminster, referring to use of satellite data in development and aid, has said “Mapping is an inherently colonial activity, there’s nothing less participatory than using a god like satellite to take images from above, and using that to decide how resources should be distributed”43.

Rupert Allan, former country manager of Humanitarian OpenStreetMap Team says of working with satellite data in underdeveloped countries – “I’m not sure you can talk about the use of satellite imagery and digital technology in developing countries without talking about the premise of old colonial patterns, and the problematics of white males telling subordinated ‘beneficiaries’ how to do things better”.44

One might argue that the empire that Google attempts to encircle with its mapping products is the empire of capital. Google’s intimate knowledge of place is coupled with its intimate knowledge of tax laws pertaining to places.

In a GeoHumanities article – ‘Standardization, Censorship, Systems, Surveillance: Artist Perambulations Through Google Earth’45 – Assistant Professor of Art History Ila Nicole Sheren looks at artists’ Google works through the lens of ‘evil media’ contending that “the visuals of Google Maps, Earth, and Street View act on the software’s myriad users, naturalizing their positions within the postcolonial, late capitalist financial system. The end user ultimately accepts locational tracking and corporate surveillance in exchange for seamless integration of the different platforms and the freedom to explore virtual space.”

Aboriginal artist Jahkarli Romanis started exploring her childhood Wadawurrung Country on Google Earth in 202046 during COVID and reacted with anger to the copyrighting of imagery and lack of acknowledgement of indigenous custodianship, accusing Google of recreating a new Terra Nullius.

Whatever notions of good that Google brings to mapping (and there are many), it imposes a single view in the generally available tools. It also controls what and how layered information can be overlaid and added. Although Google did allow users to add data and make corrections to maps at one time through the Map Maker tool, this was removed in 2017, incidentally with much of the user data subsumed into the products. 

Writer Karen Emslie comments that “Cartography allowed colonial governments to carve up indigenous lands, and mapping remains a tool of both recognition and suppression. Google Maps is a present-day manifestation of a centuries-old political impulse to visualize territory.”47 She highlights an alternative system called LandMark that allows indigenous communities to plot community land.

When companies have a corporate focus that comes from outside, it’s not hard to find examples of how they fail local and indigenous needs.

In Aotearoa, the poor pronunciation of most Māori place names in most map-based products is a case in point.

It’s one that Vodafone and Google took up in their 2017 Say it Tika marketing campaign. Google collected over 67,000 audio pronunciation corrections on a digital map as part of the public campaign, but re:news reported in 2020 that nothing was ever done with this data due to language-model incompatibilities.48

The award-winning ads for Say it Tika can still be found six years later, along with the ad agency FCB reporting that “Vodafone and Google then worked with linguists to fix these phonetic errors.”49 In fact Vodafone stated that Google very quickly told them it couldn’t follow through, and apparently dropped interest in any fixes. But somehow, in 2018, FCB was celebrating the success.

Say it Tika is an example where outsiders who may mean well don’t integrate with the community and fail in a ‘white knight’ spectacle. It’s a forced attempt at whanaungatanga50 without actually having ongoing community voices. Te Hiku Media CEO Peter-Lucas Jones was quoted in Te Ao Māori News in 2022, saying “You can’t just say it tika … you’ve got to do it tika too.”51

In terms of doing good, the offshoot of philosophical utilitarianism known as Effective Altruism seems to have similar cultural blind spots, and some at Google appear to align strongly with it. The president of the 2015 EA Global conference, Tyler Alterman, noted that “I would say there are more effective altruists at Google than any other company in the world”52.

Vox writer Dylan Matthews (who also aligns with EA) comments that “effective altruism can’t just be for white male nerds on the autism spectrum.” This was after attending the EA conference hosted at Google’s Quad Campus. He made the point that many attendees were starting to focus heavily on the human consequences of an AI apocalypse and noted that, in comparison, “multiple attendees said, global poverty is a ‘rounding error’.”53

EA at a basic level has some useful things to bring to debates on charity and giving, but has quickly grown to have delusions of grandeur and a myopic focus on quantification, which leads it to ignore many existing (if not fully effective) structures that know an awful lot about the more difficult aspects of ‘fixing the world’, especially the social, cultural and political effects that make aid more than just a technical or financial problem.

Frankly, it’s frightening to hear people like Google co-founder Larry Page express frustration with those who suggest that technology doesn’t provide solutions on its own – especially when they have the money and power to enact their ways of approaching solutions.

The Financial Times reported in an interview with Page in 201454 that he saw AI rapidly making most jobs redundant. He saw this only as a boon since, he claimed, nine out of ten people wouldn’t want to be doing their jobs anyway. This standard “move fast and break things” attitude to work (whatever you think about the future of work) suggests that he sees most people being able to weather such radical change. It makes it worrying to then consider how such attitudes would accompany Page’s aspirations for Google as expressed in that FT item – “We could probably solve a lot of the issues we have as humans.”

[Labour and Ghost Work]

Speaking of work, the use of inadvertent labour often goes unnoticed. An early example was the visual reCAPTCHA service (which we were all forcibly made to participate in), used as a way of visually interpreting objects in space – those, for instance, captured by Street View cameras55.

Some basic corrective work on mapping and 3D errors is, surprisingly, done by senior staff. But what gives the lie to the seeming simplicity of ‘springtime everywhere’ are the humans involved in Ghost Work – the innumerable, disposable, unnamed and unlocated workforce that underpins the working of machine-learning systems. The term was coined by anthropologist Mary L. Gray and computer scientist Siddharth Suri in their 2019 book ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’56, where they describe it as “arguably the dismantlement… of full-time employment itself.”

The book interviews these workers and shows how they attempt to take back some control in surprising ways, but their reality is still low paid, often exploitative and unstable work that is purposely hidden from view.

The manual processes behind the algorithms at the heart of Google Earth (and other supposedly automated systems) are almost invisible. Humans are often much cheaper processing machines than software and hardware. Data cleaning and classification is one area where machine learning needs help, as are feature recognition, error correction and countless other tasks. The construction of datasets requires initial and ongoing manual input by real people.

It might be thought that once initial training of a machine-learning algorithm is complete, humans are no longer needed. But Ghost Work shows that such labour is needed in multiple rounds of training, and must sometimes be inserted into the working, automated systems too. Geospatial specialist and podcaster Daniel O’Donohue, in reference to creating quality training data, wonders57 “if the human role in this process can be removed. That is unlikely, as introducing human logic and thinking into the system is necessary in order to make sure the end product is meaningful to humans… computers … will take short cuts and make interpretations that make no sense to people, and can render the output useless.”

[Maven and the military – The other face of Pretty Earth]

“Pretty Earth” is the public face that we as non-paying consumers get. Writer and curator Lara Chapman posits58 that Google is also selling versions of Google Earth for military and intelligence purposes (likely with higher resolution and more militarily useful data layers). After all, if the algorithms are coded generically, they become transportable. Building-footprint algorithms used in US cities could easily be set to work on datasets of enemy buildings in other countries with just a small amount of extra training (and ghost work).

Chapman also outlines a telling 2018 article in The Guardian59 discussing the acquisition of Keyhole, the company that developed the initial workings of Google Earth. Via this acquisition Google also acquired an employee with significant connections to the CIA, and who, it’s reported, was a Google ‘evangelist’ for connecting solutions to the intelligence and defence communities.

While its founding motto “don’t be evil” was still a thing, Google expanded its military contracting significantly. The article outlines CIA and NSA contracts for customised search tools and how Google was given a $27 million contract because it had already worked for years with the National Geospatial-Intelligence Agency (NGA) “building Google Earth technology according to its needs”.  From Freedom of Information requests the article shows that “Google has been doing brisk business selling Google Search, Google Earth and Google Enterprise … products to just about every major military and intelligence agency”.

Recent protests over Google’s controversial involvement in military products both for the US and for other countries such as Israel show that Google struggles with a dilemma between its staff and “don’t be evil”, against its need to win contracts and make money. Google’s employees clearly understand how their algorithmic work – some of which powers Google Earth – can be militarised.

Project Maven is one example. Maven is a US military project that was to use machine learning to analyse and annotate drone footage. Google’s involvement was halted in 2018 after 3,000 employees petitioned against it and some resigned. The data analysis in Maven appears to be similar to the object-recognition systems developed for Google Earth and Maps – that is, the spotting, tracking and labelling of people, vehicles and buildings with map overlays.

In considering how to announce its Maven involvement, top executives were quoted as saying to “Avoid at ALL COSTS any mention or implication of AI”, noting that weaponised AI is one of the most sensitive topics for the public.60

Wired magazine61 explains the outcome of protests was that “Google CEO Sundar Pichai offered guidelines for how Google will—and won’t—use the technology. One thing Pichai says Google won’t do: work on AI for weapons. But the guidelines leave much to the discretion of company executives and allow Google to continue to work for the military.”

Wired also quoted Peter Eckersley, chief computer scientist at the Electronic Frontier Foundation: “If any tech company is going to wade into a morally complex area like AI defence contracting, we’d recommend they form an independent ethics board to help guide their work.”

It’s not clear if Google took this advice. It has an in-house ethics board, but in 2021 it fired two of its outspoken leaders, with several others leaving in protest and apparent confusion and disarray among those who remain. As a result, it lost reputation in the wider research community, which remains suspicious of it.

There have been multiple staff protests over Google’s involvement with Project Nimbus, which gives huge AI and cloud computing resources to branches of the Israel Defence Forces. In 2021 Google and Amazon employees called for both companies to pull out of the project62. They described the project as providing surveillance and unlawful data collection on Palestinians. A Nation article63 describes Nimbus also as increasing IDF competence in AI as well as helping illegal settlement expansion.

Separately, but chillingly, The Jerusalem Post reported in 202164 that the Israeli military itself described the 11-day bombing of Gaza that year as the world’s first “AI war”. The detailed article shows that development was already underway on surveillance and target recommendation. A senior IDF intelligence officer said “For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy”.

With this background, it’s informative to look at very recent documentation of Israel’s use of AI and machine learning in its war in Gaza, in relation to Nimbus and Maven. In April 2024, 972mag interviewed Israeli intelligence officers about the systems in use.65

A tool called Lavender creates rankings of most of the Palestinian population based on the likelihood of their being even low-level members of Hamas. The rankings rely on mass surveillance of the population, where data is collected on cell phone use and ownership, social media such as WhatsApp groups, residence and other locality information, links to militants and other data points that may or may not be accurate. Lavender can be used to generate lists of individuals who might become targets, but is said to have a 10% inaccuracy rate. Individuals often have an estimated civilian death toll listed for if they are targeted for bombing.

The tool dubbed ‘Where’s Daddy’ can simultaneously track thousands of individuals, identifying and notifying when they enter their place of residence and marking that house for bombing. Another tool, The Gospel, marks buildings that militants may be using.

All of these systems are at least partly autonomous machine learning tools and it appears that massively invasive surveillance processes are at work.

After the abhorrent Hamas attack on Israel on the 7th Oct 2023, the use of these systems was increased and generated a huge number of target suggestions, where previously target selections were a bottleneck.  

A Nation article66 on Nimbus also refers to the US company Palantir being “responsible for most of the targeting in Ukraine” in that country’s defence against Russia’s invasion, as well as doing similar work in Israel. Catherine Connolly, from a coalition of human rights groups calling out automated warfare, postulates that targeting algorithms could be prototyped and trained in such wars, where less outside oversight is present. In the Nation article she asks “how precise … can you know a system is going to be unless it’s already been trained and tested on people?”.

It appears that Google and Amazon staff are aware of the implications of handing such AI and machine-learning capabilities to governments for use in military operations. Similarly, the descriptions of both surveillance and locational targeting imply exactly the kinds of algorithms that Google has excelled at showing off in ‘Pretty Earth’ and Maps. Since Google has given signals that it doesn’t take these kinds of contracts, nothing more can be implied, but it remains chilling that a company with such technical expertise in locational data and processing techniques has carefully crafted a statement to employees that allows it to continue government and military work.

How could this array of concerns be addressed? Transparency in various areas would help enormously in gaining some public trust but such transparency might damage commercial models that Google follows. 

Certainly it would be fair to ask for much more detail on the collection, processing and distribution of satellite and other images that make up Google Earth. This might not matter for the general public, but it would allow specialists in GIS, geography, visual theory and other areas to more accurately assess the implications of the algorithms at work across the many areas involved.

Although Google allows unprecedented access to some of its tools, such as Google Earth Engine, which enables much independent research, it is also prone to cultural myopia and control in other areas, such as its approach to philanthropy. This is a more difficult and subtle concern.

The apparent devaluing of the “don’t be evil” motto is probably more significant than Google anticipates, and coupled with the loss of trust in its ethical oversight along with less than transparent association with military contracts, raises numerous valid concerns.

Finally, and not really addressed here, but of increasing significance, are the environmental effects of the processing, storage and other work involved in Google Earth. The Guardian surveyed climate scientists from the Intergovernmental Panel on Climate Change in May 2024, finding that almost 80% discount the possibility of staying under the 1.5°C target and instead expect a potentially dystopian 2.5°C increase.67 Google releases a lot of information on its climate footprint and claims to have been carbon-neutral for some years, but there is a huge difference between offsetting and actually reducing power requirements and CO2 outputs. Although Google says it is transitioning to renewable energy, it still contracts to fossil fuel companies and funds politicians and lobbyists who deny climate change, something its employees have again protested.

Conclusion

Maps, by nature of their content and usage, have the potential to work for dominant power interests and diminish the stories of those without access to primary production and distribution.

Google has arguably cornered the mapping market and as such wields massive power. It’s also notably had a number of internal criticisms that challenge the ethos of its original “don’t be evil” motto, which has arguably been pushed to the background.

The algorithms in use at many stages of the collection, processing and distribution of these photographically visual maps are often not open, yet they possess the power to manipulate and change the primary data that is then displayed.

The aerial-photographic nature of Google Earth lends it credibility and authority that should not be so easily given, while the quotidian uses of its products lure users towards a perception of perfection. Particularly with knowledge of pixel-level and temporal interventions such as cloud removal, we should consider critically what this means for the different use cases of satellite imagery.

As with many complex processes and topics, we do well to consider these aspects from many different cultural angles if we are not to cede control completely to what might be an ideological construct that may not, in the end, have our collective best interests at heart.

Stu Sontier April/June 2024

3: How to Lie with Maps – Mark Monmonier. University of Chicago Press (1991, 1996)