Part 2: Privacy-aware data structure - Introduction to HyperLogLog
Workshop: Social Media, Data Analysis, & Cartography, WS 2024/25
Alexander Dunkel
Leibniz Institute of Ecological Urban and Regional Development,
Transformative Capacities & Research Data Centre & Technische Universität Dresden,
Institute of Cartography
This is the second notebook in a series of four notebooks:
- Introduction to Social Media data, jupyter and python spatial visualizations
- Introduction to privacy issues with Social Media data and possible solutions for cartographers
- Specific visualization techniques example: TagMaps clustering
- Specific data analysis: Topic Classification
Open these notebooks through the file explorer on the left side.
- For this notebook, please make sure that 02_hll_env is shown in the top-right corner. If not, click & select it.
Link the environment for this notebook, if not already done.
Use this command in a notebook cell:
!/projects/p_lv_mobicart_2324/hll_env/bin/python \
-m ipykernel install \
--user \
--name hll_env \
--display-name="02_hll_env"
Steep learning curve ahead
- Some of the code used in this notebook is more advanced compared to the first notebook
- We do not expect you to read or understand every step fully
- Rather, we think it is critical to introduce a real-world analytics workflow, covering current challenges and opportunities in cartographic data science
Introduction: Privacy & Social Media
- HyperLogLog is used for estimating the number of distinct items in a set (this is called cardinality estimation)
- By providing only approximate counts (with 3 to 5% inaccuracy), the overall data footprint and computing costs can be reduced significantly, providing benefits for both privacy and performance
- A set with 1 billion elements takes up only about 1.5 kilobytes of memory
- HyperLogLog sets offer similar functionality as regular sets, such as:
- lossless union
- intersection (estimated; see the sketch after this list)
- exclusion
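A minimal sketch of an intersection estimate via the inclusion-exclusion principle, using the python-hll library introduced below (the union itself is lossless; the intersection is only an estimate, and its error grows for small overlaps):

from python_hll.hll import HLL
import mmh3

hll_a = HLL(11, 5)
hll_b = HLL(11, 5)
for user in ['foo', 'bar']:
    hll_a.add_raw(mmh3.hash(user))
for user in ['bar', 'baz']:
    hll_b.add_raw(mmh3.hash(user))

card_a = hll_a.cardinality()
card_b = hll_b.cardinality()
hll_a.union(hll_b)  # hll_a now holds the union of A and B
card_union = hll_a.cardinality()
# |A ∩ B| ≈ |A| + |B| - |A ∪ B|
print(card_a + card_b - card_union)  # ≈ 1 ('bar')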
Background about HLL Research
Many possible solutions to this privacy problem exist. One approach is data minimization. In a paper, we specifically looked at options to avoid collecting original data at all, in the context of spatial data, using a data abstraction format called HyperLogLog.
Dunkel, A., Löchner, M., & Burghardt, D. (2020).
Privacy-aware visualization of volunteered geographic information (VGI) to analyze spatial activity:
A benchmark implementation. ISPRS International Journal of Geo-Information. DOI / PDF
Basics
python-hll
- Many different HLL implementations exist
- There is a Python library available
- The library is quite slow compared to the Postgres HLL implementation
- We're using python-hll for demonstration purposes herein
- The website lbsn.vgiscience.org contains more examples that show how to use Postgres for HLL calculations in Python; a brief sketch of that route follows below.
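As a hedged sketch of what the Postgres route could look like (not part of this notebook's environment): the citus/postgresql-hll extension provides hll_hash_text(), hll_add_agg() and hll_cardinality(), which can be called from Python, e.g. via psycopg2; the connection parameters below are placeholders.

import psycopg2

conn = psycopg2.connect("dbname=hlldb user=postgres")  # placeholder credentials
with conn.cursor() as cur:
    # count distinct usernames entirely inside Postgres
    cur.execute("""
        SELECT hll_cardinality(hll_add_agg(hll_hash_text(username)))
        FROM (VALUES ('foo'), ('bar'), ('bar')) AS t(username);
    """)
    print(cur.fetchone()[0])  # ≈ 2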
Introduction to HLL sets
HyperLogLog Details
- A HyperLogLog (HLL) set is used for counting the distinct elements in the set.
- For HLL to work, it is necessary to first hash items
- here, we are using MurmurHash3
- the hash function guarantees a predictable, uniform distribution of bits in the hashed values,
- which is required for the probabilistic estimation of the item count
Let's first look at the regular approach of creating a set in Python
and counting the unique items in the set:
Regular set approach in python
user1 = 'foo'
user2 = 'bar'
# note the duplicate entries for user2
users = {user1, user2, user2, user2}
usercount = len(users)
print(usercount)
HLL approach
from python_hll.hll import HLL
import mmh3
user1_hash = mmh3.hash(user1)
user2_hash = mmh3.hash(user2)
hll = HLL(11, 5) # log2m=11, regwidth=5
hll.add_raw(user1_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
usercount = hll.cardinality()
print(usercount)
log2m=11, regwidth=5?
These values define some of the characteristics of the HLL set, which affect (e.g.) how accurate the HLL set will be. A default register width of 5 (regwidth=5), with a log2m of 11, allows adding a maximum number of \begin{align}1.6 \times 10^{12} = 1{,}600{,}000{,}000{,}000\end{align} items to a single set (with a margin of cardinality error of ±2.30%). The error bound can be verified below.
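A quick check of the error bound: the relative error of an HLL estimate is ±1.04/√m, with m = 2^log2m registers.

import math

log2m = 11
m = 2 ** log2m  # number of registers
error = 1.04 / math.sqrt(m)
print(f"±{error:.2%}")  # ±2.30%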
HLL has two modes of operation that increase accuracy for small sets
- Explicit
- and Sparse
Turn off explicit mode
Because explicit mode stores hashes in full, it cannot provide any benefits for privacy, which is why it should be disabled.
Repeat the process above with explicit mode turned off:
hll = HLL(11, 5, 0, 1)  # log2m=11, regwidth=5, explicit=off, sparse=auto
hll.add_raw(user1_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
usercount = hll.cardinality()
print(usercount)
Union of two sets
At any point, we can update an HLL set with new items
(which is why HLL works well in streaming contexts):
user3 = 'baz'
user3_hash = mmh3.hash(user3)
hll.add_raw(user3_hash)
usercount = hll.cardinality()
print(usercount)
.. but separate HLL sets may also be created independently,
to be merged only at the end, for cardinality estimation:
hll_params = (11, 5, 0, 1)
hll1 = HLL(*hll_params)
hll2 = HLL(*hll_params)
hll3 = HLL(*hll_params)
hll1.add_raw(mmh3.hash('foo'))
hll2.add_raw(mmh3.hash('bar'))
hll3.add_raw(mmh3.hash('baz'))
hll1.union(hll2) # modifies hll1 to contain the union
hll1.union(hll3)
usercount = hll1.cardinality()
print(usercount)
Parallelized computation
- The lossless union of HLL sets allows parallelized computation; a minimal sketch follows after this list
- Counting distinct elements with regular sets is hard to parallelize or distribute, because duplicates must be tracked across all data partitions; this challenge is known as the Count-Distinct Problem
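A minimal sketch of this idea, assuming python-hll and mmh3 as above: each chunk is processed independently (in practice on different workers or machines; multiprocessing is left out for brevity), and the partial HLLs are merged losslessly at the end.

from python_hll.hll import HLL
import mmh3

def hll_from_chunk(chunk):
    """Build one HLL per data chunk; chunks can be processed independently."""
    hll = HLL(11, 5, 0, 1)
    for item in chunk:
        hll.add_raw(mmh3.hash(item))
    return hll

chunks = [['foo', 'bar'], ['bar', 'baz'], ['baz', 'qux']]
partials = [hll_from_chunk(c) for c in chunks]

# lossless merge: the union of the partial HLLs equals the HLL of all data
merged = partials[0]
for partial in partials[1:]:
    merged.union(partial)
print(merged.cardinality())  # ≈ 4 distinct users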
Counting Examples: 2-Components
Typically, this will result in a 2-component setup with
- the first component as a reference for the count context, e.g.:
- coordinates, areas etc. (lat, lng)
- terms
- dates or times
- groups/origins (e.g. different social networks)
- the second component as the HLL set, for counting different metrics (see the sketch after this list), e.g.
- Post Count (PC)
- User Count (UC)
- User Days (UD)
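A minimal sketch of this 2-component structure (the coordinates and user ids are hypothetical; python-hll and mmh3 as above): the first component is a coordinate, the second an HLL set counting distinct users (UC).

from collections import defaultdict
from python_hll.hll import HLL
import mmh3

def new_hll():
    return HLL(11, 5, 0, 1)

# component 1: coordinate -> component 2: HLL set of hashed user ids
grid_hll = defaultdict(new_hll)

posts = [
    ((51.05, 13.74), 'user_a'),
    ((51.05, 13.74), 'user_a'),
    ((51.05, 13.74), 'user_b'),
    ((48.14, 11.58), 'user_c'),
]
for coordinate, user_id in posts:
    grid_hll[coordinate].add_raw(mmh3.hash(user_id))

for coordinate, hll in grid_hll.items():
    print(coordinate, hll.cardinality())  # user count per coordinate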
Further information
- The above 'convention' for privacy-aware visual analytics has been published in the paper referenced at the beginning of the notebook
- for demonstration purposes, different examples of this 2-component structure are implemented in a Postgres database
- more complex examples, such as composite metrics, allow for a large variety of visualizations
- Adapting existing visualization techniques to the privacy-aware structure requires effort; most, but not all, techniques are compatible
YFCC100M Example: Monitoring of Worldwide User Days
A User Day refers to a common metric used in visual analytics.
Each user is counted once per day.
This is commonly done by concatenation of a unique user identifier and the unique day of activity, e.g. (an HLL variant follows the raw example):
userdays_set = set()
userday_sample = "96117893@N05" + "2012-04-14"
userdays_set.add(userday_sample)
print(len(userdays_set))
> 1
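The same count with an HLL set instead of a raw set (a sketch, assuming python-hll and mmh3 as above): only the hash of the concatenated userday ends up in the set.

from python_hll.hll import HLL
import mmh3

userdays_hll = HLL(11, 5, 0, 1)
userday_sample = "96117893@N05" + "2012-04-14"
# only the hash of the concatenated userday is added to the set
userdays_hll.add_raw(mmh3.hash(userday_sample))
print(userdays_hll.cardinality())
> 1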
We have created an example processing pipeline for counting user days worldwide, using the Flickr YFCC100M dataset, which contains about 50 million georeferenced photos uploaded by Flickr users under a Creative Commons license.
The full processing pipeline can be viewed in a separate collection of notebooks.
In the following, we will use the HLL data to replicate these visuals.
We'll use python methods stored and loaded from modules.
Data collection granularity
There's a difference between collecting and visualizing data.
During data collection, information can be stored with a higher
information granularity, to allow some flexibility for
tuning visualizations.
In the YFCC100M example, we "collect" data at a GeoHash granularity of 5
(about 3 km "snapping distance" for coordinates; a brief illustration follows below).
During data visualization, these coordinates and HLL sets are aggregated
further to a worldwide grid of 100x100 km bins.
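A brief illustration of this snapping (a sketch, assuming the pygeohash package, which is not part of this notebook's modules; the coordinate is hypothetical):

import pygeohash as pgh

lat, lng = 51.0504, 13.7373  # hypothetical coordinate (Dresden)
geohash = pgh.encode(lat, lng, precision=5)
# decoding yields the snapped (approximate) coordinate used at collection time
snapped_lat, snapped_lng = pgh.decode(geohash)
print(geohash, snapped_lat, snapped_lng)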
Have a look at the data structure at data collection time.
from pathlib import Path
OUTPUT = Path.cwd() / "out"
OUTPUT.mkdir(exist_ok=True)
TMP = Path.cwd() / "tmp"
TMP.mkdir(exist_ok=True)
%load_ext autoreload
%autoreload 2
import sys

module_path = str(Path.cwd().parents[0] / "py")
if module_path not in sys.path:
    sys.path.append(module_path)
from modules import tools
Load the full benchmark dataset.
filename = "yfcc_latlng.csv"
yfcc_input_csv_path = TMP / filename
if not yfcc_input_csv_path.exists():
    sample_url = tools.get_sample_url()
    yfcc_csv_url = f'{sample_url}/download?path=%2F&files={filename}'
    tools.get_stream_file(url=yfcc_csv_url, path=yfcc_input_csv_path)
Load the CSV data into a pandas DataFrame.
%%time
import pandas as pd
dtypes = {'latitude': float, 'longitude': float}
df = pd.read_csv(
    yfcc_input_csv_path, dtype=dtypes, encoding='utf-8')
print(len(df))
The dataset contains a total number of 451,949 distinct coordinates,
at a GeoHash precision of 5 (~2,500 m snapping distance).
df.head()
Calculate a single HLL cardinality (first row):
sample_hll_set = df.loc[0, "date_hll"]
from python_hll.util import NumberUtil
hex_string = sample_hll_set[2:]
print(hex_string)
hll = HLL.from_bytes(NumberUtil.from_hex(hex_string, 0, len(hex_string)))
hll.cardinality()
The two components of the structure are highlighted below.
tools.display_header_stats(
    df.head(),
    base_cols=["latitude", "longitude"],
    metric_cols=["date_hll"])
The colors refer to the two components.
Compare raw data
- Unlike with raw data, the user-ids and the distinct dates above are stored only in hashed, aggregate form, inside the HLL sets (date_hll)
- When using raw data, storing the user-id and date to count userdays would also mean that each user can be tracked across different locations and times; HLL helps prevent such misuse of data (see the sketch below)
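A sketch of the difference: a raw set supports membership queries, an HLL set does not, so individual userdays cannot be looked up afterwards.

userdays_raw = {"96117893@N05" + "2012-04-14"}
# raw data: anyone holding the set can check a specific user and day
print("96117893@N05" + "2012-04-14" in userdays_raw)  # True

# HLL: only add_raw(), union() and cardinality() are available;
# there is no operation to test whether a given userday is contained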
Data visualization granularity
- There are many ways to visualize data
- Typically, visualizations will present information at an information granularity that is suited for the specific application context
- To aggregate information from HLL data, individual HLL sets need to be merged (a union operation)
- For the YFCC100M example, the process to union HLL sets is shown here
- We're going to load and visualize this aggregate data below
from modules import yfcc
filename = "yfcc_all_est_benchmark.csv"
yfcc_benchmark_csv_path = TMP / filename
if not yfcc_benchmark_csv_path.exists():
    # sample_url may not be defined yet if the first download was skipped
    sample_url = tools.get_sample_url()
    yfcc_csv_url = f'{sample_url}/download?path=%2F&files={filename}'
    tools.get_stream_file(
        url=yfcc_csv_url, path=yfcc_benchmark_csv_path)
grid = yfcc.grid_agg_fromcsv(
    yfcc_benchmark_csv_path,
    columns=["xbin", "ybin", "userdays_hll"])
grid[grid["userdays_hll"].notna()].head()
tools.display_header_stats(
    grid[grid["userdays_hll"].notna()].head(),
    base_cols=["geometry"],
    metric_cols=["userdays_hll"])
Description of columns
- geometry: A WKT-Polygon for the area (100x100km bin)
- userdays_hll: The HLL set, containing all userdays measured for the respective area
- xbin/ybin: The DataFrame (multi-) index, each 100x100km bin has a unique x and y number.
Calculate the cardinality for all bins and store in extra column:
def hll_from_byte(hll_set: str):
    """Return HLL set from binary representation"""
    hex_string = hll_set[2:]
    return HLL.from_bytes(
        NumberUtil.from_hex(
            hex_string, 0, len(hex_string)))

def cardinality_from_hll(hll_set, total, ix=[0]):
    """Turn binary hll into HLL set and return cardinality"""
    ix[0] += 1
    loaded = ix[0]
    hll = hll_from_byte(hll_set)
    if (loaded % 100 == 0) or (total == loaded):
        tools.stream_progress_basic(
            total, loaded)
    return hll.cardinality() - 1
Progress reporting in Jupyter
- For long-running processes, progress should be reported; here, tools.stream_progress_basic() is used. Have a look at the function above, defined in /py/modules/tools.py.
- ix=[0]? This defines a mutable kwarg, which is allocated once for the function and is then used to keep track of the progress.
- loaded % 100 == 0? The % is the modulo operator, which is used here to limit the update frequency to every 100th step (where the modulo evaluates to 0); a minimal illustration follows below.
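A minimal illustration of the modulo gate used above:

total = 500
for loaded in range(1, total + 1):
    if (loaded % 100 == 0) or (total == loaded):
        print(f"{loaded}/{total}")  # prints at 100, 200, 300, 400, 500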
Calculate cardinality for all bins.
%%time
grid_cached = Path(TMP / "grid.pkl")
if grid_cached.exists():
    grid = pd.read_pickle(grid_cached)
else:
    mask = grid["userdays_hll"].notna()
    grid["userdays_est"] = 0
    total = len(grid[mask].index)
    grid.loc[mask, 'userdays_est'] = grid[mask].apply(
        lambda x: cardinality_from_hll(
            x["userdays_hll"], total),
        axis=1)
RuntimeWarning?
- The python-hll library is in a very early stage of development
- It is not fully compatible with the citus hll implementation in Postgres
- The shown RuntimeWarning (overflow) is one of the issues that need to be resolved in the future
- If you run this notebook locally, it is recommended to use pg-hll-empty for any hll calculations, as is shown (e.g.) in the original YFCC100M notebooks.
grid[mask].apply()?
- This is another example of boolean masking with pandas
- grid["userdays_hll"].notna() creates a pd.Series of True/False values
- grid.loc[mask, 'userdays_est'] uses the index of the mask to select rows, and the column 'userdays_est' to assign values
- A minimal standalone example follows below
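A minimal, standalone boolean-masking example (df_demo is a hypothetical DataFrame, analogous to grid above):

import pandas as pd

df_demo = pd.DataFrame({"value": [1, None, 3]})
mask = df_demo["value"].notna()  # pd.Series of True/False values
df_demo.loc[mask, "doubled"] = df_demo.loc[mask, "value"] * 2
print(df_demo)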
From now on, disable warnings:
import warnings
warnings.filterwarnings('ignore')
Write a pickle of the dataframe, to cache it for repeated use:
if not grid_cached.exists():
    grid.to_pickle(grid_cached)
Have a look at the cardinality below.
grid[grid["userdays_hll"].notna()].head()
Visualize the grid, using prepared methods
Temporary fix to prevent proj-path warning:
import sys, os
os.environ["PROJ_LIB"] = str(Path(sys.executable).parents[1] / 'share' / 'proj')
Activate the bokeh holoviews extension.
from modules import grid as yfcc_grid
import holoviews as hv
hv.notebook_extension('bokeh')
.. visualize the grid as an interactive map, shown in the notebook:
gv_layers = yfcc_grid.plot_interactive(
    grid, title='YFCC User Days (estimated) per 100 km grid',
    metric="userdays_est")
gv_layers