Part 3: Tag Maps Clustering and Topic Heat Maps

Workshop: Social Media, Data Analysis, & Cartography, WS 2022/23

Alexander Dunkel, Madalina Gugulica, Institute of Cartography, TU Dresden

This is the third notebook in a series of four notebooks:

  1. Introduction to Social Media data, Jupyter and Python spatial visualizations
  2. Introduction to privacy issues with Social Media data and possible solutions for cartographers
  3. Specific visualization techniques example: TagMaps clustering
  4. Specific data analysis: Topic Classification

Open these notebooks through the file explorer on the left side.

Introduction

The task in this notebook is to extract and visualize common areas or regions of interest for a given topic from Location Based Social Media data (LBSM).

On Social Media, people react to many things, and a significant share of these reactions can be seen as forms of valuation or attribution of meaning to the physical environment.


However, for visualizing such values for whole cities, there are several challenges involved:

  • Identifying reactions that are related to a given topic:
    • topics can be considered at different levels of granularity,
    • and equal terms may convey different meanings in different contexts.
  • Social Media data is distributed very irregularly:
    • data peaks in urban areas and highly frequented tourist hotspots,
    • while some less frequented areas do not feature any data at all.
  • When visualizing spatial reactions to topics, this unequal distribution must be taken into account; otherwise, maps would always be biased towards areas with a higher density of information.
  • For this reason, normalizing results is crucial. The usual approach is to evaluate the local distribution against the global (e.g. spatial) distribution of data.
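The normalization idea above can be sketched with plain NumPy. All numbers below are made up for illustration: raw counts favor the busiest cell, while dividing the local (topic) counts by the global (all posts) counts shifts the focus to where the topic is overrepresented.

```python
import numpy as np

# Hypothetical post counts per grid cell (illustrative values only):
# "topic" = posts matching a topic, "total" = all posts in that cell
topic_counts = np.array([50, 10, 2, 0])
total_counts = np.array([1000, 100, 10, 5])

# Raw counts suggest cell 0 is the clear hotspot...
raw_rank = np.argsort(topic_counts)[::-1]

# ...but normalizing by the global distribution shows that,
# relative to overall activity, cell 2 has the strongest signal
normalized = topic_counts / np.maximum(total_counts, 1)
norm_rank = np.argsort(normalized)[::-1]

print(raw_rank[0], norm_rank[0])
```

Here, cell 0 has the most topic posts in absolute terms, but cell 2 has the highest share of topic posts relative to overall activity.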
Tag Maps Package
  • The Tag Maps package was developed to cluster tags according to their spatial area of use
  • Currently, the final visualization step of Tag Maps clustering (placing labels etc.) is only implemented in ArcMap
  • Herein, we instead explore cluster results using Heat Maps and other graphics in python
  • In the second part of the workshop, it will be shown how to visualize the actual Tag Maps
Where is my custom area/tag map?
  • Below, you can choose between different regions/areas to explore in this notebook, provided based on student input
  • The final Tag Map visualization step will be done in the second part of the workshop
  • This will be done in ArcMap. However, it is possible to visualize Tag Maps in Jupyter Lab, too. The notebook is not yet ready - have a look at the work in progress here


Define output directory

In [1]:
from pathlib import Path
OUTPUT = Path.cwd() / "out"

Temporary fix to prevent proj-path warning:

In [2]:
import sys, os
os.environ["PROJ_LIB"] = str(Path(sys.executable).parents[1] / 'share' / 'proj')

Create 01_Input folder, which will be used for loading data.

In [3]:
INPUT = Path.cwd() / "01_Input"
INPUT.mkdir(exist_ok=True)

Load the TagMaps package, which will serve as a base for filtering, cleaning and processing noisy social media data.

In [4]:
from tagmaps import TagMaps, EMOJI, TAGS, TOPICS, LoadData, BaseConfig
from tagmaps.classes.shared_structure import ItemCounter
In [5]:
%load_ext autoreload
%autoreload 2
In [6]:
import sys

module_path = str(Path.cwd().parents[0] / "py")
if module_path not in sys.path:
    sys.path.append(module_path)
from modules import preparations
from modules import tools
# ignore shapely deprecation warnings
import warnings
from shapely.errors import ShapelyDeprecationWarning
warnings.filterwarnings("ignore", category=ShapelyDeprecationWarning)

Load Data & Plot Overview

Retrieve sample LBSN CSV data:

In [7]:
source = ""
# source = ""
# source = ""
# source = ""
# source = ""
# source = ""
# source = ""
In [8]:
# clean any data first
sample_url = tools.get_sample_url()
lbsn_dd_csv_uri = f'{sample_url}/download?path=%2F&files='
Loaded 35.86 MB of 35.87 (100%)..
Extracting zip..
Retrieved, extracted size: 100.03 MB
CPU times: user 1.74 s, sys: 864 ms, total: 2.6 s
Wall time: 9.95 s

Initialize tag maps from BaseConfig

In [9]:
tm_cfg = BaseConfig()
Loaded 337 stoplist items.
Loaded 214 inStr stoplist items.
Loaded 4 stoplist places.
Loaded 3 place lat/lng corrections.

Optionally, filter data per origin or adjust the number of top terms to extract:

In [10]:
tm_cfg.filter_origin = None
tm_cfg.max_items = 3000
In [11]:
# cluster options; attribute names follow the tagmaps BaseConfig
tm_opts = {
    "tag_cluster": tm_cfg.cluster_tags,
    "emoji_cluster": tm_cfg.cluster_emoji,
    "location_cluster": tm_cfg.cluster_locations,
    "output_folder": tm_cfg.output_folder,
    "remove_long_tail": tm_cfg.remove_long_tail,
    "limit_bottom_user_count": tm_cfg.limit_bottom_user_count,
    "max_items": tm_cfg.max_items,
    "topic_modeling": tm_cfg.topic_modeling}

tm = TagMaps(**tm_opts)

a) Read from original data

Read input records from csv

In [12]:
from IPython.display import clear_output
In [13]:
input_data = LoadData(tm_cfg)
with input_data as records:
    for ix, record in enumerate(records):
        tm.add_record(record)
        if (ix % 1000) != 0:
            continue
        # report every 1000 records
        clear_output(wait=True)
        print(f'Loaded {ix} records')
Loaded 244000 records
Cleaned input to 18229 distinct locations from 244942 posts (File 2 of 2) - Skipped posts: 0 - skipped tags: 146435 of 1596294

Total post count (PC): 244942
Total tag count (PTC): 1596294
Total emoji count (PEC): 168177
Long tail removal: Filtered 336 Emoji that were used by less than 2 users.
Long tail removal: Filtered 157643 Topics that were used by less than 5 users.
CPU times: user 53 s, sys: 1.7 s, total: 54.7 s
Wall time: 53.9 s

b) Optional: Write (& Read) cleaned output to file

Output cleaned data to Output/Output_cleaned.csv,
with terms (post_body and tags) cleaned based on the top (e.g.) 1000 hashtags found in the dataset.

In [14]:
Writing cleaned intermediate data to file (Output_cleaned.csv)..

Have a look at the output file.

In [15]:
import pandas as pd

file_path = Path.cwd() / "02_Output" / "Output_cleaned.csv"
display(pd.read_csv(file_path, nrows=5).head())
print(f'{file_path.stat().st_size / (1024*1024):.02f} MB')
origin_id lat lng guid user_guid loc_id post_create_date post_publish_date post_body hashtags emoji post_views_count post_like_count loc_name
0 1 51.033300 13.733300 3722a733575bfbd6c173dc2deadb795b 3722a733575bfbd6c173dc2deadb795b 51.0333:13.7333 NaN NaN fürstenzug NaN NaN 704 0 Fürstenzug
1 1 51.033300 13.733300 ac28e6f721e25624d76a9eb3afc61b4f ac28e6f721e25624d76a9eb3afc61b4f 51.0333:13.7333 NaN NaN NaN NaN NaN 194 0 Hochschulstraße
2 1 51.033300 13.733300 d71246d019402566900f0d59d0c9fbef d71246d019402566900f0d59d0c9fbef 51.0333:13.7333 NaN NaN dresdner;stadtfest NaN NaN 181 0 Dresdner Stadtfest
3 1 51.031945 13.730668 0ff8be1630ab8decae97efecfda9f5f3 0ff8be1630ab8decae97efecfda9f5f3 51.031945353:13.7306680064 NaN NaN club NaN NaN 228 0 Club 11
4 1 51.031224 13.730091 aab11415bd4546adef5f513ac00edc90 aab11415bd4546adef5f513ac00edc90 51.0312241:13.7300907 NaN NaN international;house NaN NaN 66 0 International Guest House
46.02 MB

Read from pre-filtered data

Read data from the (already prepared and filtered) cleaned output.

In [16]:
CPU times: user 2.79 ms, sys: 0 ns, total: 2.79 ms
Wall time: 2.8 ms
In [17]:
Total user count (UC): 103684
Total user post locations (UPL): 168815
Number of locations with names: 2294
Total distinct tags (DTC): 174260
Total distinct emoji (DEC): 2003
Total distinct locations (DLC): 18229
Total tag count for the 3000 most used tags in selected area: 1778500.
Total emoji count for the 1667 most used emoji in selected area: 167809.
Bounds are: Min 13.727952 51.0257536334 Max 13.78818 51.051867

Topic selection & Tag Clustering

The next goal is to select reactions to given topics. TagMaps allows selecting posts for 4 different types:

  • TAGS (i.e. single terms)
  • EMOJI (i.e. single emoji)
  • LOCATIONS (i.e. named POIs or coordinate pairs)
  • TOPICS (i.e. list of terms)

Set basic plot to notebook mode and disable interactive plots for now

In [18]:
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
mpl.rcParams['savefig.dpi'] = 120
mpl.rcParams['figure.dpi'] = 120

We can retrieve preview maps for the TOPIC dimension by supplying a list of terms.

For example, "park", "green" and "grass" should give us an overview of where such terms are used in our sample area.

In [19]:
nature_terms = ["park", "grass", "nature"]
fig = tm.get_selection_map(
    TOPICS, nature_terms)
In [20]:
urban_terms = ["strasse", "city", "shopping"]
fig = tm.get_selection_map(
    TOPICS, urban_terms)


We can visualize clusters for the selected topic using HDBSCAN.

The important parameter for HDBSCAN is the cluster distance,
which is chosen automatically by Tag Maps given the current scale/extent of analysis.

In [21]:
# tm.clusterer[TOPICS].cluster_distance = 150
fig = tm.get_cluster_map(
    TOPICS, nature_terms)

We can get a map of clusters and cluster shapes (convex and concave hulls).

In [22]:
fig = tm.get_cluster_shapes_map(
    TOPICS, nature_terms)
--> 31 cluster.

Behind the scenes, Tag Maps utilizes the Single Linkage Tree from HDBSCAN to cut clusters at a specified distance. This tree shows the hierarchical structure for our topic and its spatial properties in the given area.

In [23]:
fig = tm.get_singlelinkagetree_preview(
    TOPICS, nature_terms)

Cluster centroids

Similarly, we can retrieve centroids of clusters. This shows again the unequal distribution of data:

In [24]:
fig = tm.clusterer[TOPICS].get_cluster_centroid_preview(
    nature_terms, single_clusters=True)
(1 of 16617) Found 2928 posts (UPL) for Topic 'park-grass-nature' (found in 6% of DLC in area) --> 31 cluster.

Heat Maps

  • Visualization of clustered tagmaps data is possible in several ways.
  • In the second workshop (end of January), we are using ArcMap, to create labeled maps from clustered data
  • The last part in this notebook will be to use Kernel Densitry Estimation (KDE) to create a Heat Map for selected topics.

Load additional dependencies

In [25]:
import numpy as np
from sklearn.neighbors import KernelDensity

Load Flickr data only

Instagram, Facebook and Twitter data is based on place accuracy, which is unsuitable for the Heat Map graphic.

We'll work with Flickr data only for the Heat Map.

  • 1 = Instagram
  • 2 = Flickr
  • 3 = Twitter

Reload data, filtering only Flickr

In [26]:
tm_cfg.filter_origin = "2"
tm = TagMaps(**tm_opts)
In [27]:
input_data = LoadData(tm_cfg)
with input_data as records:
    for ix, record in enumerate(records):
        tm.add_record(record)
        if (ix % 1000) != 0:
            continue
        # report every 1000 records
        clear_output(wait=True)
        print(f'Loaded {ix} records')
Loaded 31000 records
Cleaned input to 15951 distinct locations from 31309 posts (File 2 of 2) - Skipped posts: 213572 - skipped tags: 36969 of 232936

Total post count (PC): 31309
Total tag count (PTC): 232936
Total emoji count (PEC): 0
CPU times: user 7.18 s, sys: 283 ms, total: 7.46 s
Wall time: 7.19 s

Get Topic Coordinates

The distribution of these coordinates is what we want to visualize.

In [28]:
Long tail removal: Filtered 1959 Tags that were used by less than 5 users.
Long tail removal: Filtered 13164 Topics that were used by less than 5 users.

Topic selection

In [29]:
topic = "grass-nature-park"
points = tm.clusterer[TOPICS].get_np_points(
    item=topic, silent=False)

Get All Points

In [30]:
all_points = tm.clusterer[TOPICS].get_np_points()

For normalizing our final KDE raster, we'll process both the topic selection points and the global data distribution (i.e. all points in the dataset).

In [31]:
points_list = [points, all_points]
  • The input data is a simple list (as a numpy.ndarray) of decimal degree coordinates
  • each entry represents a single user who published one or more posts at a specific coordinate
In [32]:
print(points[:5])
print(f'Total coordinates: {len(points)}')
[[13.766544 51.036369]
 [13.755054 51.039289]
 [13.753187 51.043785]
 [13.76133  51.038574]
 [13.762543 51.037939]]
Total coordinates: 788

Data projection

  • For faster KDE, we project data from WGS1984 (epsg:4326) to UTM,
  • which allows us to calculate directly in Euclidean space.
  • The TagMaps package automatically detects the most suitable UTM coordinate system;
  • for the Großer Garten sample data, this is Zone 33N (epsg:32633).
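The projection step can be sketched with pyproj, using the EPSG codes named above. Whether tagmaps uses pyproj internally is an assumption here, and the sample coordinate is made up (a point near the sample area):

```python
from pyproj import Transformer

# WGS84 lat/lng to UTM Zone 33N
# (always_xy=True: accept and return lng, lat axis order)
transformer = Transformer.from_crs(
    "epsg:4326", "epsg:32633", always_xy=True)

# A hypothetical coordinate inside the Dresden sample area
easting, northing = transformer.transform(13.7533, 51.0438)
print(round(easting), round(northing))
```

The resulting easting/northing values are in metres, so distances and densities can be computed without accounting for the curvature of decimal degrees.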

Project lat/lng to UTM 33N (Dresden) using Transformer:

In [33]:
for idx, points in enumerate(points_list):
    points_list[idx] = np.array(
        [tm.clusterer[TOPICS].proj_transformer.transform(
                point[0], point[1]
        ) for point in points])
crs_wgs = tm.clusterer[TOPICS].crs_wgs
crs_proj = tm.clusterer[TOPICS].crs_proj
print(f'From {crs_wgs} \nto {crs_proj}')
From epsg:4326 
to epsg:32633
[[ 413517.95931985 5654593.12815275]
 [ 412717.86708307 5654931.37875778]
 [ 412595.43876854 5655433.54329967]
 [ 413156.52117809 5654844.455095  ]
 [ 413240.37616266 5654772.41458472]]
CPU times: user 124 ms, sys: 2.02 ms, total: 126 ms
Wall time: 124 ms

Calculating the Kernel Density

To summarize sklearn, a KDE is executed in two steps, training and testing:

Machine learning is about learning some properties of a data set and then testing those properties against another data set. A common practice in machine learning is to evaluate an algorithm by splitting a data set into two. We call one of those sets the training set, on which we learn some properties; we call the other set the testing set, on which we test the learned properties.

The only attributes we care about in training are lat and long.

  • Stack locations using np.vstack, extract specific columns with [:,column_id]
  • reverse order: lat, lng
  • Transpose to rows (.T), which is easier to handle in Python
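The stack-and-transpose step can be illustrated in isolation (toy coordinates, not the workshop data):

```python
import numpy as np

# Two columns (x, y) stacked row-wise, then transposed so each
# row becomes one (y, x) training sample, as expected by sklearn
x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])

xy_train = np.vstack([y, x]).T
print(xy_train.shape)   # (3, 2)
print(xy_train[0])      # first row: y, then x
```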
In [34]:
xy_list = list()
for points in points_list:
    y = points[:, 1]
    x = points[:, 0]
    xy_list.append([x, y])
In [35]:
xy_train_list = list()
for x, y in xy_list:
    xy_train = np.vstack([y, x]).T
    xy_train_list.append(xy_train)
In [36]:
[[5654848.52605509  411157.72099359]
 [5655013.10449734  411199.12143   ]
 [5655062.51267723  411094.43114165]
 [5655302.64257714  411360.89388318]
 [5654864.8466037   411463.97303812]
 [5655317.87392501  411322.44768116]]

Get bounds from total data

Access min/max decimal degree bounds object of clusterer and project to UTM33N

In [37]:
lim_lng_max = tm.clusterer[TAGS].bounds.lim_lng_max
lim_lng_min = tm.clusterer[TAGS].bounds.lim_lng_min
lim_lat_max = tm.clusterer[TAGS].bounds.lim_lat_max
lim_lat_min = tm.clusterer[TAGS].bounds.lim_lat_min

# project WGS1984 to UTM
topright = tm.clusterer[TOPICS].proj_transformer.transform(
    lim_lng_max, lim_lat_max)
bottomleft = tm.clusterer[TOPICS].proj_transformer.transform(
    lim_lng_min, lim_lat_min)

# get separate min/max for x/y
right_bound = topright[0]
left_bound = bottomleft[0]
top_bound = topright[1]
bottom_bound = bottomleft[1]

Create Sample Mesh

Create a grid of points at which to predict.

In [38]:
# create a sample mesh between the projected bounds
# (100j = number of samples per axis; this resolution is an assumption)
xx, yy = np.mgrid[
    left_bound:right_bound:100j,
    bottom_bound:top_bound:100j]
In [39]:
xy_sample = np.vstack([yy.ravel(), xx.ravel()]).T
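The mesh construction can be sketched with placeholder bounds (0..1000 and 0..500 stand in for the projected UTM bounds):

```python
import numpy as np

# Complex step 100j asks mgrid for 100 evenly spaced samples per axis
xx, yy = np.mgrid[0:1000:100j, 0:500:100j]

# Flatten the mesh into (n_samples, 2) rows of (y, x) pairs,
# the shape later passed to kde.score_samples()
xy_sample = np.vstack([yy.ravel(), xx.ravel()]).T
print(xx.shape, xy_sample.shape)
```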

Generate Training data for Kernel Density.

  • The largest effect on the final result comes from the chosen bandwidth for the KDE: a smaller bandwidth means a higher resolution,
  • but may not be suitable for the given density of data (e.g. results with low validity).
  • A higher bandwidth will produce a smoother raster result, which may be too inaccurate for interpretation.
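The bandwidth trade-off can be demonstrated on synthetic 1D data: a small bandwidth resolves two separate density bumps, while a large one smooths them into a single hump. All values below are chosen for illustration:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Synthetic samples from two bumps (stand-ins for point coordinates)
data = np.concatenate([
    rng.normal(0, 1, 200), rng.normal(10, 1, 200)])[:, np.newaxis]
grid = np.linspace(-5, 15, 200)[:, np.newaxis]

densities = {}
for bandwidth in (0.3, 8.0):
    kde = KernelDensity(kernel='gaussian', bandwidth=bandwidth).fit(data)
    densities[bandwidth] = np.exp(kde.score_samples(grid))

def count_peaks(z):
    """Count strict interior local maxima of a density curve"""
    return int(np.sum((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])))

print(count_peaks(densities[0.3]), count_peaks(densities[8.0]))
```

With bandwidth 0.3 both bumps remain visible as separate peaks; with bandwidth 8.0 the estimate becomes a single broad hump.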
In [40]:
kde_list = list()
for xy_train in xy_train_list:
    kde = KernelDensity(
        kernel='gaussian', bandwidth=200, algorithm='ball_tree')
    kde.fit(xy_train)
    kde_list.append(kde)

score_samples() returns the log-likelihood of the samples

In [41]:
z_list = list()
for kde in kde_list:
    z_scores = kde.score_samples(xy_sample)
    z = np.exp(z_scores)
    z_list.append(z)
CPU times: user 6.68 s, sys: 2.16 ms, total: 6.68 s
Wall time: 6.66 s

Remove values below zero;
these are locations where the selected topic is underrepresented, given the global KDE mean.

In [42]:
for ix in range(0, 2):
    z_list[ix] = z_list[ix].clip(min=0)

Normalize z-scores to 1 to 1000 range for comparison

In [43]:
for idx, z in enumerate(z_list):
    z_list[idx] = np.interp(
        z, (z.min(), z.max()), (1, 1000))
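`np.interp` used this way is just a linear rescale of the scores into a common range; a toy example with made-up values:

```python
import numpy as np

# Rescale arbitrary density scores into a shared 1..1000 range
z = np.array([0.0, 0.25, 0.5, 1.0])
z_scaled = np.interp(z, (z.min(), z.max()), (1, 1000))
print(z_scaled)
```

The minimum maps to 1, the maximum to 1000, and all other values are placed linearly in between, which makes the topic and global rasters directly comparable.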

Subtract global z-scores from the local z-scores (our topic selection).

norm_val defines the weight of the global z-score:

  • lower means a weaker normalization effect, higher means a stronger normalization effect
  • range: 0 to 1
In [44]:
norm_val = 0.5
z_orig = z_list[0]
z_is = z_list[0] - (z_list[1]*norm_val)
z_results = [z_orig, z_is]
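With toy values on the shared 1..1000 scale (made up, not from the data), the effect of the weighted subtraction is easy to see: a cell that is busy overall loses its lead once global activity is discounted:

```python
import numpy as np

# Hypothetical scores on the shared 1..1000 scale
z_topic = np.array([900.0, 500.0, 100.0])    # topic KDE
z_global = np.array([1000.0, 200.0, 100.0])  # global KDE

norm_val = 0.5  # 0 = no normalization, 1 = full normalization
z_is = z_topic - (z_global * norm_val)
print(z_is)
```

Cell 0 starts far ahead of cell 1 (900 vs 500), but after discounting half of the global activity both end up at 400: the apparent hotspot was largely an artifact of overall posting density.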

Reshape results to fit grid mesh extent

In [45]:
for idx, z_result in enumerate(z_results):
    z_results[idx] = z_results[idx].reshape(xx.shape)

Plot original and normalized meshes

In [46]:
from matplotlib.ticker import FuncFormatter

def y_formatter(y_value, __):
    """Format UTM y-labels as decimal degrees for improved legibility"""
    # proj_transformer_back: inverse (UTM -> WGS84) transformer of the clusterer
    xy_value = tm.clusterer[TOPICS].proj_transformer_back.transform(
            left_bound, y_value)
    return f'{xy_value[1]:3.2f}'

def x_formatter(x_value, __):
    """Format UTM x-labels as decimal degrees for improved legibility"""
    xy_value = tm.clusterer[TOPICS].proj_transformer_back.transform(
            x_value, bottom_bound)
    return f'{xy_value[0]:3.1f}'
In [47]:
# a figure with a 1x2 grid of Axes
fig, ax_lst = plt.subplots(1, 2, figsize=(10, 3))

for idx, zz in enumerate(z_results):
    axis = ax_lst[idx]
    # Change the fontsize of tick labels
    axis.tick_params(axis='both', which='major', labelsize=7)
    axis.tick_params(axis='both', which='minor', labelsize=7)
    # set plotting bounds
    axis.set_xlim(
        [left_bound, right_bound])
    axis.set_ylim(
        [bottom_bound, top_bound])
    # plot contours of the density
    levels = np.linspace(zz.min(), zz.max(), 15)
    # Create Contours,
    # save to CF-variable so we can later reference
    # it in colorbar hook
    CF = axis.contourf(
        xx, yy, zz, levels=levels, cmap='viridis')
    # titles for plots
    if idx == 0:
        title = f'Original KDE for topic {topic}\n(Flickr Only)'
    else:
        title = f'Normalized KDE for topic {topic}\n(Flickr Only)'
    axis.set_title(title, fontsize=11)
    axis.set_xlabel('lng', fontsize=10)
    axis.set_ylabel('lat', fontsize=10)
    # plot points on top
    axis.scatter(
        xy_list[1][0], xy_list[1][1], s=1, facecolor='white', alpha=0.05)
    axis.scatter(
        xy_list[0][0], xy_list[0][1], s=1, facecolor='white', alpha=0.1)
    # replace x, y coords with decimal degrees text (instead of UTM dist)
    if idx > 0:
        axis.xaxis.set_major_formatter(FuncFormatter(x_formatter))
        axis.yaxis.set_major_formatter(FuncFormatter(y_formatter))
    # Make a colorbar for the ContourSet returned by the contourf call
    cbar = fig.colorbar(CF, ax=axis, fraction=0.046, pad=0.04)
    cbar.set_label(
        "Number of \nUser Post Locations (UPL)", fontsize=11)
    cbar.ax.tick_params(axis='both', labelsize=7, length=4)