OneDrive Photo Captions and Tags

It has been a while since I last used the OneDrive Windows 10 app, and to my amazement it has since been updated with features that bring it on par with some of the functionality available in the OneDrive web app.

Among these is the ability to add captions and tags to photos. There was already a way to add captions and tags via the OneDrive website; surprisingly, the Android app still does not provide a way to view or edit captions.

OneDrive_Windows10_App.png

Something I particularly like about OneDrive is that it writes the captions and tags back to the photo metadata, which is important for many archivists. Other online services neither embed this information in the file nor offer an easy way to export it. For example, you can export file metadata from Google Photos as a separate file using Google Takeout, and Flickr provides similar functionality, but after exporting your photos and metadata, writing that information back into the image files is not so easy. OneDrive clearly has an advantage for this type of task, as it ties in nearly seamlessly with the photo management workflow you have on your PC.


Integrating with your Photo Management Workflow

Photos stored in OneDrive can be synced back to your PC along with their captions and tags, and can then be read and edited using your preferred photo management application. It is worth understanding how each application reads and writes captions and tags to avoid conflicts or out-of-sync metadata; even though applications may follow common standards, their behavior can vary. OneDrive seems to follow the same behavior as Windows Photo Gallery, which I discussed in my post Accessing Windows Photo Gallery Metadata using Exiftool.

XNView_IPTC.png
The XnView Info pane displays the caption and tags added in OneDrive. This is because the information is saved back to the file metadata and is available to any application that supports it.
Windows_File_Explorer_Photo_Properties.png
The Windows File Explorer Properties dialog displays the OneDrive caption in the Title and Subject fields.
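
To see exactly which metadata fields a given application reads or writes, you can dump the usual caption and keyword tags with ExifTool. The snippet below is a minimal sketch that shells out to exiftool (assuming it is installed and on your PATH); photo.jpg is a placeholder file name.

    import subprocess

    # Fields commonly used for captions and tags by Windows Photo Gallery,
    # Windows Explorer and OneDrive: EXIF XPTitle/XPKeywords, IPTC and XMP.
    TAGS = [
        "-EXIF:XPTitle",
        "-EXIF:XPKeywords",
        "-IPTC:Caption-Abstract",
        "-IPTC:Keywords",
        "-XMP-dc:Title",
        "-XMP-dc:Description",
        "-XMP-dc:Subject",
    ]

    def dump_caption_fields(path):
        # -G prints each tag's group so you can see which standard
        # (EXIF, IPTC or XMP) a value came from.
        result = subprocess.run(
            ["exiftool", "-G", *TAGS, path],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)

    dump_caption_fields("photo.jpg")  # placeholder file name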


OneDrive Auto Tags

OneDrive also has auto tags, which are tags added by Microsoft’s computer vision services. These tags are only displayed; they are not written to the file. In the screenshot provided, OneDrive tagged the image with “Sky”, “Outdoor” and “Building”; I added the “landmark” tag manually. Any tag added by the user, however, will be written back to the file. Flickr has a similar feature, but it differentiates the automatically added tags from the user-added ones by changing their appearance, which in my opinion is a better design. If you wish to write the auto tags back to the file, I describe a workaround in the post Saving OneDrive Photo Auto Tags to the file metadata.

WPG_Info_Pane.png
Windows Photo Gallery displays captions and tags added using OneDrive.
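
The details of that workaround are in the linked post; purely as an illustration, once you have the auto tags as a plain list (the retrieval step is omitted here), they can be written into the standard keyword fields with ExifTool. The tag list and file name below are hypothetical.

    import subprocess

    def write_keywords(path, tags):
        # Append each tag to both IPTC Keywords and XMP dc:Subject so that
        # Windows, OneDrive and most photo managers will pick them up.
        args = ["exiftool", "-overwrite_original"]
        for tag in tags:
            args += [f"-IPTC:Keywords+={tag}", f"-XMP-dc:Subject+={tag}"]
        subprocess.run(args + [path], check=True)

    # Hypothetical example using the auto tags shown in the screenshot above.
    write_keywords("photo.jpg", ["Sky", "Outdoor", "Building"])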


Net Metering with the Autoridad de Energía Eléctrica (AEE)

With the expected increases in energy rates, some time ago we decided to install a photovoltaic (PV) system at our home, joining the more than 16,000 residences with photovoltaic systems in Puerto Rico. Installing the equipment was relatively simple. Not so simple was the bureaucratic process for interconnection with the Autoridad de Energía Eléctrica (AEE), but that is another topic. In the end, net metering billing was activated.

There are several ways a home PV system can be configured: it can be combined with batteries and/or with net metering. In our case, after some analysis, we opted for net metering only. In this configuration the excess energy is “exported” to the AEE, which credits it on the bill. That credit is then used at times when the PV system is not producing energy, such as at night.

This is easier to visualize with this screenshot from the monitoring app, which shows a typical day: there is little consumption during the day because we are at work, while the PV system produces and exports energy to the AEE. For that day's reading, the net consumption was -7.70 kWh, meaning the AEE received more energy than we consumed.

Screenshot_20191201-083222.png

Blue: Energy produced. | Orange: Energy consumed. | Gray: Net consumption
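
As a minimal sketch of the arithmetic behind that reading (the figures below are made up for illustration; they are not taken from the monitoring app):

    # Hypothetical daily totals in kWh, for illustration only.
    produced = 12.5   # energy generated by the PV system
    consumed = 4.8    # energy used by the household

    # Net consumption as reported by the meter: a negative value means
    # more energy was exported to the AEE than was drawn from the grid.
    net = consumed - produced
    print(f"Net consumption: {net:.2f} kWh")
    # The negative balance becomes a credit applied against nighttime use.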

The AEE benefits in several ways, such as:

  1. The cost of the equipment is borne by the consumer. The AEE does not have to invest in purchasing solar panels or in building generation plants.
  2. The AEE does not have to spend resources or fuel on the energy it receives from the consumer's PV system. This matters because the hours when the sun is shining are the “peak” hours of energy consumption, and the AEE sometimes resorts to more expensive fuels such as diesel to meet demand. The energy it receives from PV systems is “free” and is resold by the AEE, while the energy credited back to the consumer is usually produced at night by cheaper sources.
    You can see updated reports on the cost of energy per unit here.
  3. Energy is generated closer to the point of consumption. Transmitting energy from the point of generation (e.g., the Aguirre plant) to the point of consumption (e.g., your home) involves a cost and a loss of energy along the way, which can amount to between 5% and 8%. The farther apart they are, the more energy is lost, so generating locally yields savings.
    Read: We calculated emissions due to electricity loss on the power grid
  4. Environmental protection. This is perhaps the most important benefit, since it reduces the environmental impact caused by the extraction, burning, transport and storage of fossil fuels such as oil and natural gas. It also means less risk of noncompliance with environmental laws, noncompliance that has already resulted in fines for the AEE. You can see the AEE's updated generation summary (Resumen de Generación).

    AEE: Interconexión de GD y Medición Neta

A good resource for learning more about home solar generation in Puerto Rico is the YouTube channel “Lino te lo dice”.

OpenStreetMap aided an important post-hurricane mortality study in Puerto Rico

A recent study published in the New England Journal of Medicine is making the news: Mortality in Puerto Rico after Hurricane Maria.

The study puts the number at around 4,600 deaths that may be attributable to the hurricanes (Irma and Maria) that damaged Puerto Rico’s infrastructure. The storms left a significant portion of the island without electricity, potable water, and communications for an extended period of time. Access to supplies and movement was hampered by landslides and damaged roads and bridges.

While there is no dispute about the sharp uptick in overall deaths in Puerto Rico in the months immediately following the storms, linking those deaths to the storm has been a contentious issue. In the days and weeks following the storm, hospitals were either functioning at reduced capacity or not at all. Government resources were understandably directed to recovery efforts, so counting the dead was not a top priority. The way the government tallies the dead does not help either, as storm-related deaths are counted only if they are certified as directly caused by the storm: dying after being hit by flying debris during the storm counts. Yet an elderly diabetic whose insulin spoiled for lack of refrigeration, and who died when his glucose spun out of control, is considered to have died of complications from diabetes. A person with a respiratory infection may have gone without early treatment because of damaged roads and died of complications; that death is recorded as respiratory disease, not the storm. No communications meant no 911. Such conditions also take a toll on mental health, so suicide rates increased during this period as well.

171205-9296-1
Months after the hurricane, certain roads were still susceptible to landslides and communities lacked electricity, running water and communications.

OpenStreetMap data aided this study in ways not possible only a few years ago. In a recent radio interview, Domingo Marqués, one of the study’s authors, said that without the map density data this study would not have been possible¹. Thanks to a worldwide push led by the Humanitarian OpenStreetMap Team and other contributors holding map-a-thons and working individually, sufficient map data for Puerto Rico was largely available by the time the study needed it. For this study in particular, OSM information was used for selecting the sample.

From the study:
Sampling buildings using OpenStreetMap
Households within barrios were identified using OpenStreetMap (OSM) layers for structures identified as “buildings”. For each randomly selected barrio, we iteratively downloaded structure information using the OSM overpass API, calculated centroids for structures identified as buildings, and randomly sampled 35 locations. We generated geospatial PDFs for each barrio level with an OSM base layer, a barrio boundary and the sampled building points. The geospatial PDFs were loaded on Samsung Tab A 7” Android devices and displayed using PDFMaps. Enumerators were trained to load maps, identify their position and navigate using these geospatial PDFs.
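
As a rough sketch of that sampling step (not the authors' actual code; the bounding box below is a made-up example covering one hypothetical barrio), the Overpass API can be queried for buildings, their center points collected, and 35 locations drawn at random:

    import random
    import requests

    # Hypothetical bounding box (south, west, north, east) for one barrio.
    BBOX = (18.38, -66.08, 18.42, -66.03)

    # Overpass QL: fetch ways tagged as buildings and return a center point
    # for each one ("out center;").
    query = f"""
    [out:json][timeout:60];
    way["building"]({BBOX[0]},{BBOX[1]},{BBOX[2]},{BBOX[3]});
    out center;
    """

    response = requests.post(
        "https://overpass-api.de/api/interpreter", data={"data": query}
    )
    response.raise_for_status()
    elements = response.json()["elements"]

    centroids = [
        (e["center"]["lat"], e["center"]["lon"])
        for e in elements
        if "center" in e
    ]

    # Randomly sample up to 35 building locations, as described in the study.
    sample = random.sample(centroids, k=min(35, len(centroids)))
    print(sample)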

The study thanked OSM contributors and serves as another example of OpenStreetMap’s usefulness.

171021-9034
Map-a-thons like this one, held in San Juan, Puerto Rico, helped improve OpenStreetMap data.

As somber as this mortality study is, it can give us hope for responding better to future catastrophes by helping us understand how these deaths occurred. Traditional hurricane preparedness centered on seeking shelter away from areas prone to flooding. Analysis of the causes of these fatalities, combined with OpenStreetMap, may change that thinking. A location may not be prone to flooding yet still be vulnerable because a landslide can cut off its only access road. Medicine, potable water, and other supplies could be pre-positioned before a storm’s arrival and tailored to demographic figures to better serve communities that may have a hard time evacuating. After a storm, rapid post-disaster data analysis can direct relief resources to the people who need them. Temporary clinics could be set up quickly in critical locations to tend to people suffering from chronic or respiratory diseases and avoid complications that could lead to death.

OpenStreetMap, and the continuing contributions of its volunteers in drawing and identifying buildings and other physical features, will hopefully play a role in many other studies and applications for understanding this disaster and preparing for future ones worldwide.

¹ 5/30/2018 AM 810 Fuego Cruzado interview

Finding where a photo was taken using Google Vision

Scanning photos taken by my grandparents on a trip to Europe in 1960 is quite fascinating. Seeing the places they visited, some of which I have also had the opportunity to visit, is a nice way to remember them. However, the photos presented a bit of a challenge: while some had handwritten captions mentioning where they were taken, most lacked this information.

Modern digital cameras embed metadata into an image file with details such as the date and time taken and the geolocation. These bits of information make it easy to know where a photo was taken. With these analog photos, however, I have only a black-and-white image of an unknown place.

1960s photos meet Artificial Intelligence

Luckily, modern image recognition techniques can identify landmarks. Google’s Vision API can identify an image’s location by using AI smarts to compare it against a vast image dataset. As more images are fed in, Google Vision “learns” more about landmarks and the objects contained within images. While the API is targeted at application developers, Google also provides a website where you can upload an image and get this information back.

Google Vision API Website
The Google Vision API provides a way of identifying landmarks from photos. In this example, the service correctly identified the photo as taken at the Buen Retiro Park in Madrid, Spain – https://cloud.google.com/vision/
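
For the API route, a minimal sketch using the google-cloud-vision Python client might look like the following. It assumes a Google Cloud project with the Vision API enabled and credentials already configured; scan.jpg is a placeholder file name.

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # Load the scanned photo and request landmark annotations.
    with open("scan.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.landmark_detection(image=image)

    for landmark in response.landmark_annotations:
        # Each annotation carries a name, a confidence score and
        # one or more latitude/longitude hints.
        print(landmark.description, landmark.score)
        for location in landmark.locations:
            print("  ", location.lat_lng.latitude, location.lat_lng.longitude)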

On mobile devices, Google Lens also uses this technology to recognize landmarks and present relevant information.

Google Lens
Google Lens uses Google Vision to identify landmarks from images.


Adding location information to the image file metadata

After positively identifying the landmark and the geographical location where a photo was taken using Google Vision, I turn to GeoSetter to add the missing geolocation information, as well as captions, to the image file metadata. Adding this information allows other applications and services to use it as well.


GeoSetter
Adding geolocation information to an image file using GeoSetter – http://www.geosetter.de/en/main-en/
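
GeoSetter does this through a GUI; if you prefer to script the same step, the GPS and caption fields can also be written with ExifTool. The sketch below is only an illustration under those assumptions (approximate coordinates for Buen Retiro Park and a placeholder file name), not how GeoSetter itself works:

    import subprocess

    def geotag(path, lat, lon, caption):
        # Write EXIF GPS coordinates plus a caption; the *Ref tags record the
        # hemisphere (N/S, E/W) while the numeric tags take absolute values.
        subprocess.run([
            "exiftool", "-overwrite_original",
            f"-GPSLatitude={abs(lat)}",
            f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
            f"-GPSLongitude={abs(lon)}",
            f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
            f"-XMP-dc:Description={caption}",
            f"-IPTC:Caption-Abstract={caption}",
            path,
        ], check=True)

    # Approximate coordinates of the Buen Retiro Park in Madrid (illustrative).
    geotag("scan.jpg", 40.4153, -3.6845, "Parque del Buen Retiro, Madrid, Spain")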


OneDrive
Applications or services such as OneDrive can use the geolocation data to display the location on a map. The caption is also displayed, showing the name of the monument as identified with the help of Google’s landmark recognition engine.
2018-05-22 (1).png
Geotagged photo shown in Windows Photo Gallery; the added location metadata is shown as a geotag.
