Book Review: "The Mindful Geek" By Michael Taft

It's interesting how complicated it can be to sit and think. In a fascinating meta-study1, researchers from the University of Virginia and Harvard found that people would rather perform mundane tasks, or even give themselves electric shocks, than sit alone in silence and think. On average, participants in the study chose to self-administer a small electric shock about 1.47 times per 15-minute interval.

Researchers are not sure why we have such an aversion to sitting in the moment with our thoughts, but I'm sure that you can come up with a laundry list of things you would rather not think about.

Meditation then might feel like taking a boat straight into the eye of the storm. After all, the essence of the practice is to sit and think. The practice, however, is more complicated than it appears. Even the word meditation encompasses a wide variety of techniques and schools of thought.

Meditation can conjure up images of mystical figures in robes sitting atop the Himalayas chanting magical-sounding phrases in the hope of achieving enlightenment or nirvana.

However, when you peel back all the mysticism, you find something different. Scientific studies have shown that a meditation habit can have some great benefits.

The Mindful Geek is Michael Taft's attempt at de-mystifying mindfulness meditation with a field guide to the practice; a kind of invitation to scientifically minded people to join in. As he describes it, "this is a practical book, almost a manual or a handbook of mindfulness meditation".

The why

The first feature of The Mindful Geek is that it is filled to the brim with references to scientific studies. Taft continually highlights the tangible benefits of maintaining a meditation habit, quoting dozens of studies performed on long-term meditators and beginning practitioners. The goal is clear: to convince the skeptical, secular reader of the real-world benefits of meditation without resorting to mystical and spiritual arguments that might alienate the titular geek.

When a claim is made it is backed by some kind of scientific study; when he makes connections between ideas that do not yet have scientific support, he is forthcoming about it. Making that distinction is important, and Taft makes it fairly well throughout.

Taft is equally clear about what meditation will not do for you. If you were hoping for superpowers or to transcend to a higher plane of being (a la Stargate), I'm afraid you're out of luck.

Important: As of the writing of this article, I've not reviewed the studies in question. Take my, and anyone else's, words with a pinch of salt. As always, do your due diligence and question what you read. Science is never simple and there's probably a mountain of caveats and details behind any scientific claim.

The how

Taft takes an engineering-like approach to introducing meditation. The practice is broken down into three individual components:

  • Concentration: the ability to train your attention on an object
  • Sensory clarity: the ability to explore the object of meditation in detail
  • Acceptance: being alright with whatever feeling or emotion arises during meditation

These components are employed together in various ratios to build what he calls the meditation algorithm, a series of steps which are repeated again and again throughout your sitting session. This repeating algorithm then becomes the basis of the five meditation techniques detailed in the book.

I loved this precise approach to the practice. Like in chemistry, the basic building blocks of meditation come together in various forms to create a large variety of different meditation techniques. The process becomes less intimidating for new practitioners by having a clear set of steps to follow.

Taft also provides, free of charge, a set of guided meditation audio tracks to supplement the written instructions in the book. These are useful when starting out, as sitting in complete silence can be difficult for beginners.

Sitting is hard

Taft also discusses at length the kind of obstacles one might encounter when sitting, such as how to deal with strong negative emotions, physical and emotional pain, and distractions.

He's thorough and tries to cover as much ground as possible to appeal to a range of practitioners, including those suffering from anxiety or depression. Taft gives you some of the necessary tools to decide how to approach pain points in a safe and comfortable way.

This highlights an important point: meditation is for everyone. It isn't only the realm of mentally healthy people.

Of course, you will encounter many different obstacles over years of practicing that Taft cannot hope to cover in the book. The book does a good job of getting you going.

The mindfulness part of mindfulness meditation

Taft describes the mindfulness meditation practice as an attempt to "make the unconscious conscious"; like taking a peek "under the hood" of your awareness. So while there is plenty of detail on the mechanical aspects of meditation, Taft spends an equally long part of the book on the mindfulness part of mindfulness meditation, because as he puts it "if you are meditating every day...these are the kinds of minutia that become of functional interest to you."

Taft highlights the benefits of focused, deliberate attention over pure mind-wandering, using studies to support his argument. Here you'll note traces of Cal Newport's Deep Work, as Taft shares Newport's dislike for our increasing addiction to the always-on, always-multitasking culture.

Part of the mindfulness meditation practice is about fostering a better relationship with our own emotions. Taft discusses the evolution of our emotions as a kind of unconscious guidance system that leads us away from danger and towards safety and food. This section was particularly interesting for me. I've written about this topic in the past, discussing how we sometimes misunderstand what our emotions are trying to tell us. If you're anxious it means you need to keep your eyes open, it doesn't mean you're not ready for the interview.

While these chapters are interesting it's important to remember that this is a deeply functional book and "[it's] all for the sake of doing the practice."

Conclusion and some negatives

In general I think the book is great; however, I have a few criticisms.

First, the structure of the book can be a little confusing. The first 7 chapters flow nicely from one to the next: what mindfulness meditation is in chapters 1 and 2, your first practice in chapter 3, and the introduction of the components of meditation in chapters 4 through 7.

The chapters that follow feel more like a series of essays on mindfulness, the benefits of meditation, cognitive science, meditation in everyday life, and psychology.

This structure is not necessarily bad, nor does it detract from the quality of the book, but as a person who likes connecting threads, I felt a little uncomfortable. The reader is not led towards a single conclusion; rather, the author attempts to discuss different but interconnected ideas that run in parallel.

I also think some of these chapters could have been condensed and combined. I often felt like Taft made a good case for his main points and I was ready to move on, but the chapter would go on for a few more pages. As I mentioned earlier, many of the concepts in those chapters will be familiar to readers of books such as Deep Work or even Thinking, Fast and Slow. So perhaps to those unfamiliar with those topics the repetition is beneficial.

Another small gripe is with the ebook itself. The book is available for free if you sign up to Michael Taft's mailing list, or in paperback and Kindle from Amazon. The paid-for Kindle version of the book is unfortunately badly organized with a broken table of contents which points to various endnotes and not the individual chapters. In addition, each endnote gets its own page rather than being nicely formatted. Not a major issue but for a paid-for version it's unfortunate. Note that the free ebook, which comes as a Kindle-friendly Mobi file, is perfectly well formatted and has a useful table of contents.

In the end however, Michael Taft has created a great field guide to the practice of mindfulness meditation squarely aimed at the secular/skeptical/geek crowd. Taft leverages ample scientific evidence to make his case, the techniques are clearly described making it easy to get going, and the numerous discussions of mindfulness habits are crucial in helping the reader take their habit to the next level.

Did I mention that he uses Star Trek, Star Wars and Dune references throughout the book? And yes, he does quote Yoda.

You can get the book on paperback and Amazon Kindle or for free by signing up for Michael's mailing list.

Michael Taft has given a talk at Google and he's been interviewed on the You Are Not So Smart podcast on the same topic.

Have a good one. Until next time.

-- Jay Blanco

  1. A study of studies.

How meditation has made my showers great

According to my habit-tracking app today marks the 45th consecutive day of meditation for me. Instead of explaining why I'm meditating I'm going to talk about showers.

I see meditation, in part, as a kind of practice in sensory input management. What the hell does that mean?

From the moment we wake up, we experience millions of sensations every second: the feeling of the sun on every millimeter of bare skin, the sounds of cars as you walk down the road, the smell of food in a restaurant, the sight of the afternoon sunshine as it pokes through the window, aches and pains, heartbeats, breaths, and the thoughts in your own head.

It gets tricky to come up with a measure of the amount of data we absorb every second without making some tenuous assumptions, but needless to say it's a lot of information. Clearly we don't experience all these inputs at the same intensity, simultaneously and constantly; your brain might actually explode1.

Instead the brain manages those inputs, making the important ones more clear while dulling others. If I ask you to focus on the sensation of your feet inside of your shoes you'll quickly discover some new sensations you didn't know were there. Your toes might be kinda warm, while your ankles are a little colder. You might be surprised to find you were curling your toes or that there was a little bit of tension on the arch of the foot. Perhaps the sole of your shoe is slightly rougher in one spot than the rest. When you focus attention on a particular sensation it gains in intensity, clarity, and resolution.

In part, the process of meditation is about developing the ability to focus on sensations. It doesn't mean you'll develop x-ray vision or that you'll be able to throw out your prescription glasses, but what you can see, feel, hear, and taste will be a bit more nuanced, clear, and intense.

This leads me to showers.

Having an improved ability to focus on body sensations has made my showers more enjoyable. Instead of seeing them as a chore or letting my brain run wild with thoughts, I sit on the shower floor, close my eyes, and focus on the feeling of the water flowing over me. Instead of getting this dull blob of sensation all over my skin I sense detail, differences in temperature, and flowing patterns. I hear a mixture of sounds as the water hits the shower floor at different times, and the sound of the water going down the drain.

It's akin to what artists refer to as the artistic eye. The skill of actively analyzing how an object or a landscape looks. What shapes make up the object? How do these objects overlap? How does lighting create a complex pattern of soft shadows and highlights? You focus on the subject and pick out as much detail as possible.

You are noticing things that were always there but to which you never paid much attention, and that has made showers way more awesome.

Until next time. Have a great one.

-- Jay Blanco


  1. Though probably not. I didn't find any references to brains blowing up because of information overload.

Using and R to understand my music listening habits

Let's do something completely different today! I'm a huge data nerd and I've been uploading my music play information to a site called going back to 2005. Think of it like a "have read" book list for your music, except that it tracks every time you listen to a song.

I've collected quite a bit of data in the last 11 years so I decided to visualize my music habits in the last decade and see what I can learn about it.

For non-developers: If you're not interested in coding just scroll past the code snippets and look at the nice plots and the accompanying explanation.

For developers: Check out the github repository for the project1. Keep in mind that I assume you know how to code in R and are familiar with packages like dplyr, jsonlite, and ggplot2. If you're not, this will not make sense and I suggest you start elsewhere.

Let's go!

Getting the data

Unfortunately there's no easy official method to download your entire dataset of scrobbles. One way or another you need to use the API to download your scrobbles one page at a time. You can either program against the API yourself or use somebody else's code to pull the data into a CSV or another readable format.

Since I wanted to get at my data as quickly as possible, I went for the latter approach. After trying a couple of services I settled for a nice python script which you can get here.
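If you'd rather program against the API yourself, the core of the task is paging through the user.getrecenttracks method, which returns at most 200 plays per page. Here's a minimal sketch of building the per-page query URL; build_recent_tracks_url is my own name for the helper, and USER and API_KEY are placeholders you'd fill in with your own values:

```r
# Sketch: build the URL for one page of scrobbles from the API.
# You'd loop over pages until you reach totalPages from the response.
build_recent_tracks_url <- function(user, api_key, page) {
  paste0("",
         "?method=user.getrecenttracks",
         "&user=", user,
         "&api_key=", api_key,
         "&format=json&limit=200&page=", page)
}

# Usage sketch (requires jsonlite and a valid API key):
# json <- jsonlite::fromJSON(build_recent_tracks_url("USER", "API_KEY", 1))
# total_pages <- json$recenttracks$`@attr`$totalPages
```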

Note: You might run into a site by Ben Benjamin which is more user-friendly, but keep in mind that it does not support UTF-8, so foreign characters like Korean, Japanese, and Chinese will be garbled in your dataset.

The dataset

Once you've downloaded your scrobble data you'll get a CSV file with a structure that looks close to this:

Column Name Sample data
time 1327616201
song Good Song
artist Blur
album Think Tank

Each row represents a single play of a track with the timestamp of the play, the name of the song, the artist, and the album to which the track belongs.

This information is taken from and not from the metadata tags of the file to which you listened. This has implications for missing album data or unrecognized songs, but more on that later.

If you used the python script to which I linked, you'll have some additional ID strings like the MusicBrainz ID. Regardless of which method you chose, at a minimum you should have the data in the table above.

As for my dataset, I have a total of 29,674 scrobbles recorded between 13/02/2005 and 03/07/2016.

Loading the data

Let's go ahead and load the data using the following code in a file called load.R2:


library(readr)

# Throw-away labels ("id1", etc.) for the ID columns since we won't keep them
col_names <- c("date", "song", "artist", "album", "id1", "id2", "id3")

raw_scrobbles <- read_tsv("data-raw/raw_scrobbles.csv", col_names = col_names)

The data I used was tab-separated so I used read_tsv; depending on your file you might want to use read_csv or any of the other read_* functions in the readr package. Look at the readr package documentation for more information.

Cleaning the data

There are two kinds of cleaning that need to happen and we'll tackle them in order:

  1. Convert data to a useful format that makes analysis easier (Easy)
  2. Fill in as much missing information as possible (Hard)

The date-time of the play is stored in the TSV file as seconds since the beginning of the epoch. I need to convert this into the more useful date-time type POSIXct to facilitate later manipulations. I also select the relevant columns, and then re-arrange by date.


library(dplyr)

tidy_scrobbles <- select(raw_scrobbles, date, song, artist, album)
tidy_scrobbles <- mutate(tidy_scrobbles, date = as.POSIXct(date, origin = "1970-01-01"))
tidy_scrobbles <- arrange(tidy_scrobbles, date)

This results in a tidy dataset that looks something like:

Source: local data frame [29,674 x 4]

                  date                    song            artist              album
                <time>                    <chr>           <chr>               <chr>
1  2005-02-13 12:20:00        Tired 'N' Lonely Roadrunner United                 NA
2  2005-02-13 12:20:01        Tired 'N' Lonely Roadrunner United                 NA
3  2005-02-13 12:20:02                  Suunta            Nicole Suljetut ajatukset
4  2005-02-13 12:20:03         Army Of The Sun Roadrunner United                 NA
5  2005-02-13 12:20:04         Army Of The Sun Roadrunner United                 NA
..                 ...                     ...               ...                ...

Well that was easy! But don't celebrate yet, because there's a clear problem. A bunch of tracks don't have album information. I'm not sure why the information is missing, but it could be a mismatch between the song name in the file's metadata and's repository, the song not being part of an album, or the song not being in's repository at all.

To figure out how much data is missing we first need to look at unique tracks and then count the NAs in each column.

check_health <- function(data) {
  unique_tracks <- unique(select(data, -date))

  # Count the missing values in each column of the unique tracks
  colSums(
}
Luckily no artist or song names are missing; however, 563 unique tracks, just under 8% of the tracks, are missing album information. These account for ~7% of the total number of scrobbles. Let's see if we can fill in some of this information automatically.

Using the Last.FM API to get album information

The API allows users to search for artists and songs and to get information about artists, albums, and tracks.

The following query to the track.getInfo method returns a JSON data structure with track information based on the artist and song name:[ARTIST_NAME]&track=[TRACK_NAME]&api_key=[API_KEY]&format=json

The API key is obtained from the API page. Make sure to sign up for an account and check out the documentation for the other parts of the API.

Let's write a function to build that query:


library(urltools)  # for param_set

# Assumes LASTFM_API_KEY is a variable holding the key from your account
build_track_info_query <- function(artist, track, api_key = LASTFM_API_KEY,
                                   base = "") {
  base <- param_set(base, "method", "track.getInfo")
  base <- param_set(base, "artist", URLencode(artist))
  base <- param_set(base, "track", URLencode(track))
  base <- param_set(base, "api_key", api_key)
  base <- param_set(base, "format", "json")
  base
}

The function takes an artist and song name, applies the URL encoding to the parameters, and builds the URL of the query3.

Next we fetch the JSON from the site and parse it for the relevant information. Here I use the jsonlite package to send the GET message and parse the response for the album name.

Since fetching JSON responses is a time consuming task, I use the package memoise to build result-caching versions of the fetching functions.
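To see what memoise buys us, here's a toy example; slow_square is made up for illustration and is not part of the scrobble code:

```r
library(memoise)

# An artificially slow function
slow_square <- function(x) {
  Sys.sleep(1)
  x * x
}

# memoise() wraps it so repeated calls with the same
# arguments return the cached result instead of recomputing
fast_square <- memoise(slow_square)

fast_square(4)  # first call pays the 1-second cost
fast_square(4)  # second call returns the cached result immediately
```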

Finally, I create a new function that calls mapply to fetch the album information for a set of artists and song names.


library(jsonlite)  # fromJSON
library(memoise)   # result caching

fetch_track_album <- function(artist, track) {
  print(paste0("Fetching ", artist, " song: ", track))
  json <- fromJSON(build_track_info_query(artist, track))

  if (is.null(json$track$album)) return(NA)

  json$track$album$title
}

memfetch_track_album <- memoise(fetch_track_album)

fetch_tracks_albums <- function(artists, tracks) {
  if (length(artists) != length(tracks)) {
    stop("Cannot fetch albums because artist and track inputs differ in length")
  }

  mapply(memfetch_track_album, artist = artists, track = tracks)
}

Now that we have a way to fetch the album name for multiple tracks, we need a function to fill the missing album names in the original dataset.

fill_missing_albums <- function(data) {
  data %>%
    filter( %>%
    distinct(song, artist, album) %>%
    mutate(album = fetch_tracks_albums(artist, song)) %>%
    left_join(data, ., by = c("artist", "song")) %>%
    transmute(date, artist, song, album = coalesce(album.x, album.y))
}
This worked pretty well, but it didn't fill in all the missing album names. Nevertheless, I decided to press on, saving out the clean dataset for the next phase.

Enriching data

To make things more interesting let's enrich the dataset with genre information. doesn't assign genre information to songs and artists, instead relying on user-defined tags to provide that metadata. Let's fetch the most popular tags for each of the artists in the dataset and use that as the genre for all scrobbles4.

The procedure for building the query, getting the response, and parsing the JSON content is similar to that for fetching the album name. You can note the differences by looking at the functions, first for building the query:

# As before, LASTFM_API_KEY is assumed to hold your API key
build_artist_toptags_query <- function(artist, api_key = LASTFM_API_KEY,
                                       base = "") {
  base <- param_set(base, "method", "artist.gettoptags")
  base <- param_set(base, "artist", URLencode(artist))
  base <- param_set(base, "api_key", api_key)
  base <- param_set(base, "format", "json")
  base
}

and then for fetching the JSON response and parsing it into a useful R vector:

fetch_artist_toptags <- function(artist) {
  print(paste0("Fetching ", artist))
  json <- fromJSON(build_artist_toptags_query(artist))

  if (length(json$toptags$tag) == 0) return(NA)

  # Use the most popular user tag as the artist's genre
  json$toptags$tag$name[1]
}

Finally, make sure to save this tidy enriched data. We've done a lot of work so far, it would be a shame to lose it.

Let's get plotting!

To start with let's have a look at the overall number of plays per month to get a feel for the data.



library(zoo)    # as.yearmon, scale_x_yearmon
library(tidyr)  # complete, full_seq

# Wrapper around full_seq to work with zoo::yearmon objects
full_seq_yearmon <- function(x) {
  as.yearmon(full_seq(as.numeric(x), 1/12))
}

clean_scrobbles %>%
  count(yrmth = as.yearmon(date)) %>%
  complete(yrmth = full_seq_yearmon(yrmth)) %>%
  ggplot(aes(yrmth, n)) + geom_line() + geom_point() +
  scale_x_yearmon() +
  labs(x = "Month", y = "Plays", title = "Play counts per month",
       caption = "source: scrobbles") +
  theme_hrbrmstr()
To have a nice looking plot we summarize the counts per month with the help of as.yearmon in the zoo package and some dplyr goodness.

The dplyr::count function is a convenience function that wraps the often-used group_by, summarize, arrange pattern into a single function call. Here we create a new column called yrmth, group by it, and then summarize the data frame by it.
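To make that equivalence concrete, the two pipelines below give the same counts; df is a toy data frame, not the scrobble data:

```r
library(dplyr)

df <- data.frame(genre = c("rock", "jazz", "rock", "rock"))

# The convenience function...
a <- count(df, genre, sort = TRUE)

# ...wraps this common pattern:
b <- df %>%
  group_by(genre) %>%
  summarise(n = n()) %>%
  arrange(desc(n))
```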

If we were to plot this data now, ggplot2 would assume that the dataset is complete and create a single connected line across all the data points. We need to make sure that missing data is clearly visible on the plot, otherwise our visualizations are misleading. Let's pad the dataset with NA values (or 0 if you like) for the total count in months where no scrobbles happened.

We use full_seq in our own function full_seq_yearmon to generate a full sequence of yearmon objects from 2005 to 2016, and then invoke complete to expand the yrmth column to contain this sequence.
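Here's the same idea on a toy data frame with plain integers instead of yearmon values; df and its plays column are made up for illustration:

```r
library(tidyr)

df <- data.frame(month = c(1, 2, 5), plays = c(10, 12, 7))

# complete() pads the missing months 3 and 4 with NA play counts,
# so ggplot2 shows a gap instead of drawing a misleading line
padded <- complete(df, month = full_seq(month, period = 1))
```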

We use the hrbrmisc package by hrbrmstr because it contains some nice custom ggplot2 themes but it's completely optional.

It seems a lot of data is missing, likely because scrobbling was not set up again after a format or a reinstall of the music player.

There's also a large peak around January 2012; mid-way through my PhD. I'm not certain what caused the peak but I will take a look at this period in a follow up post.

Top 20 artists of all time

Let's look at the most popular artists overall.

top_artists <- clean_scrobbles %>%
  count(artist, genre, sort = TRUE) %>%
  ungroup() %>%
  top_n(20, n)

First we count the number of scrobbles by artist, making sure to keep the genre column around, and then select the top 20 artists.

The next part is specific to my dataset: I had to translate the names of Korean and Japanese bands into English since I couldn't get ggplot2 to display the characters correctly. We'll be translating these artist names again later, so we write a function to make that easier.

translate_artists <- function(x) {
  recode(x,
    `소녀시대` = "Girls' Generation"
  )
}

top_artists_translate <- top_artists %>%
  mutate(artist = translate_artists(artist))

And finally we create a lollipop-esque plot of the artists ranked by play counts, colored by the genre:

top_artists_translate %>%
  ggplot(aes(reorder(artist, n), n, color = genre)) +
  geom_segment(aes(xend = reorder(artist, n)), yend = 0, color = 'grey50') +
  geom_point(size = 3) +
  coord_flip() +
  labs(y = "Plays", x = "", title = "My taste is all over the place",
       subtitle = "Top 20 Artists by plays", caption = "source: scrobbles") +
  theme_hrbrmstr() +
  scale_color_viridis(discrete = TRUE, name = "Genre") +
  theme(panel.grid.major.x = element_blank(),
        panel.grid.minor.x = element_blank(),
        panel.grid.major.y = element_line(color = 'grey60', linetype = 'dashed'))

The code above then produces the following ranking plot:


Turns out I'm a big fan of John Mayer and in general my taste in music is very varied.

Artists preference over time

Let's take a look at how artist preference evolves from year to year, with some inspiration from this beautiful subway-style plot that we've adapted to our needs:

That's really nice, isn't it? For some reason the popularity of Pendulum has been steadily decreasing, while John Mayer has remained pretty steady.

Let's take a look at how we made that plot. First we count the number of plays per year per artist, select only the top 10 artists per year, and then rank the results using row_number. We use row_number to avoid having multiple artists with the same rank, as they would overlap in the final plot:

library(lubridate)  # for the year() function

tops <- 10

artist_ranking <- clean_scrobbles %>%
  mutate(artist = translate_artists(artist)) %>%
  count(year = year(date), artist) %>%
  group_by(year) %>%
  mutate(rank = row_number(-n)) %>%
  filter(rank < tops + 1)
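As an aside, the difference between row_number and min_rank only matters when artists tie on play count; here's a toy vector of made-up play counts to show it:

```r
library(dplyr)

n <- c(50, 30, 30, 10)

row_number(-n)  # 1 2 3 4: the tie at 30 is broken arbitrarily
min_rank(-n)    # 1 2 2 4: both tied artists would share rank 2
```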

To make the plot we start by building data frames that contain the names of the top artists at either end of the plot:

artist_end_tags <- artist_ranking %>%
  ungroup() %>%
  filter(year == max(year)) %>%
  mutate(year = as.numeric(year) + 0.25)

artist_start_tags <- artist_ranking %>%
  ungroup() %>%
  filter(year == min(year)) %>%
  mutate(year = as.numeric(year) - 0.25)

These data frames will be used to define the labels on the left and right of the plot, so we increment/decrement the year to avoid overlapping with the edges of the plot.

To make the plot clearer I highlight a few artists by assigning them a color, and set the other artists to gray so they blend nicely with the background:

colors <- c("John Mayer" = "#6a40fd", "Netsky" = "#198ce7", "Chet Baker" = "#563d7c", "Jorge Drexler" = "#f1e05a",
            "Joe Satriani" = "#b07219", "Pendulum" = "#e44b23", "Antoine Dufour" = "green")

othertags <- artist_ranking %>% distinct(artist) %>% filter(!artist %in% names(colors)) %>% .$artist

colors <- c(colors, setNames(rep("gray", length(othertags)), othertags))

highlights <- filter(artist_ranking, artist %in% names(colors)[colors != "gray"])

Finally, we put it all together:

ggplot(data = artist_ranking, aes(year, rank, color = artist, group = artist, label = artist)) +
  geom_line(size = 1.7, alpha = 0.25) +
  geom_line(size = 2.5, data = highlights) +
  geom_point(size = 4, alpha = 0.25) +
  geom_point(size = 4, data = highlights) +
  geom_point(size = 1.75, color = "white") +
  geom_text(data = artist_start_tags, x = 2003.8, size = 4.5) +
  geom_text(data = artist_end_tags, x = 2017, size = 4.5) +
  scale_y_reverse(breaks = 1:tops) +
  scale_x_continuous(
    breaks = seq(min(artist_ranking$year), max(artist_ranking$year)),
    limits = c(min(artist_ranking$year) - 1.5, max(artist_ranking$year) + 1.6)) +
  scale_color_manual(values = colors) +
  theme_hrbrmstr() + theme(
    legend.position = "",
    panel.grid.major.y = element_blank(),
    panel.grid.minor.y = element_blank(),
    panel.grid.minor.x = element_blank(),
    panel.grid.major.x = element_line(color = 'grey60', linetype = 'dashed')) +
  labs(x = "Year", y = "Rank", title = "Sorry, Pendulum",
       subtitle = "Top 10 artists per year by plays")

Note that we increase the limits on the X axis beyond the range of the data to make the artist labels at the start and end visible.

Top 10 songs

Next let's look at the top 10 songs. The process is similar to making the artist ranking plot, except that we also need to translate the names of foreign songs, and we use unite to create nice labels for the plot that include both the artist and song name.

tops <- 10

translate_songs <- function(x) {
  recode(x,
    `소녀시대 (girls' generation)` = "Girls' Generation",
    `소원을 말해봐 (genie)` = "Genie"
  )
}

top_songs <- clean_scrobbles %>%
  count(song, artist, sort = TRUE) %>%
  ungroup() %>%
  top_n(tops) %>%
  mutate(
    artist = translate_artists(artist),
    song = translate_songs(song)
  ) %>%
  unite(fullname, artist, song, sep = " - ", remove = FALSE)

top_songs %>%
  ggplot(aes(reorder(fullname, n), n)) +
  geom_segment(aes(xend = reorder(fullname, n)), yend = 0, color = 'grey50') +
  geom_point(size = 3, color = viridis(1)) +
  scale_color_viridis() + coord_flip() + theme_hrbrmstr() + theme(legend.position = "") +
  labs(x = "", y = "Plays", title = "Hey there K-pop", subtitle = "Top 10 songs by plays",
       caption = "source: scrobbles")

It seems that John Mayer holds the top artist spot because I listened to many of his songs, each a small number of times; as a result, none of his songs cracked the top 10. On the other hand, it's interesting that 3 of the top 10 songs are K-pop songs.


Genres over time

Let's make another subway-style ranking plot for the top 10 genres. The code is similar to that which created the artists ranking plot, so I omit it here.

It seems my love for all forms of metal decreased towards the end of my high-school years, with acoustic, singer-songwriter, electronic, and jazz music picking up the mantle. Over time my taste in music has mellowed a bit.

One more thing

This was a very fun project, and I learnt a bunch about API usage and how to make more complicated visualizations with ggplot2. I'm excited to do this again in another 10 years and see how my taste has changed from today.

As a fun takeaway for you, the reader, I've created a Spotify playlist with some of my top songs of all time:

Top Songs by Jay Blanco

Until next time. Have a great one.

-- Jay Blanco

For great R and data analytics content, please check out R-Bloggers.


  1. The code is not production quality, but feel free to adapt it.
  2. I used throw-away labels for the ID columns because I'm not keeping them.
  3. I realize that the httr package might be a more modern approach than using urltools, but frankly I hadn't done much API work before this.
  4. This is certainly not perfect, but we are aiming for good enough. Saving tags for all songs would take too long.

Why you make me sad, Apple Music?

Three months ago I switched to Apple Music from Spotify after many years on the service. Last week that experiment came to an end.

I switched away from Spotify for multiple reasons:

  • Offline playback left much to be desired - You couldn't view Album/Artist pages for local files without internet access, in spite of having tagged all local files correctly.
  • Consolidation of services - I've replaced Google calendar and contacts with iCloud equivalents and have been better off for it. All of my devices are Apple-made so living in that eco-system makes sense.
  • Local file cloud syncing - The ability to have all my songs synced to the cloud and accessible on the phone sounded like a great idea. Spotify's solution for getting local files onto your phone was more clunky.
  • Try something new - This is not a great reason, but as a nerd you probably get it.

After only three months of using Apple Music I am confident in switching back. As far as I'm concerned, Apple has a long way to go before Apple Music is an attractive streaming service for me. Unfortunately the thing that's holding it back is not the service itself; it's iTunes.

iTunes has become a sort of Frankensteinian amalgamation of disparate bits of functionality held together by rough stitching and a UI that feels just as shabbily put together.

iTunes is a big thorn in Apple's rear. It has taken on too many responsibilities; it is simultaneously too difficult to fix and far too important not to. It is the application embodiment of the God class anti-pattern. Apple's struggle with iTunes is well documented, so I will try to keep my thoughts brief.

Local files or Apple Music

One of the biggest issues I had with iTunes and Apple Music was search. When you search for a song, you have to select whether you want to go to Apple Music or your local iTunes library.

When you want to listen to music, you don't usually care where it comes from as long as the speakers make those nice sounds you like so much.

This adds an unnecessary step between you and a fun time with music, as well as cluttering an already overfilled UI. Over time that gets tiring, like looking at a restaurant menu with too many kinds of ramen. Your eyes just glaze over.

I am also not entirely clear on what Apple is actually doing with my local files or what is being stored where. It seems not all my music has uploaded yet, but there was no clear indication of how long that would take or how far along the process was.

No easy scrobbling to Last.FM

While working on an upcoming article, I realized I'd been derelict in uploading song playback info to Last.FM (scrobbling) over the last few months. When I tried to set up scrobbling on my Mac and iPhone I realized there's no easy way to do it.

Admittedly, this one's not Apple's fault, but it nevertheless pushed me to go back; Spotify supports built-in scrobbling on both iOS and macOS.

It takes too much effort to add songs to playlists

The procedure to add a song, once you've found it, goes something like this:

  1. Right-click on the song,
  2. then select "Add to Playlist...", which opens a context menu,
  3. then select the playlist from said menu.

That's three clicks, which doesn't sound like much, but compared to drag-and-drop in Spotify it feels slow and clunky. And if you have more than a handful of playlists, like I had in Spotify, using a context menu is ridiculous.

Too many things on the screen

I don't know how to put this more eloquently. There's just way too many freaking things on the screen. There's a sidebar with categories and playlists, and your own playlists sit under a separate heading from the public Apple Music playlists you subscribe to. The terminology is confusing too: songs you get from the streaming service are added to your Apple Music library, which is somehow the same but not the same as your local library of songs, which in turn also gets synced to iCloud, which is not the same as your Apple Music library.

Then, to switch to music videos, there's a drop-down menu at the top which you can edit. This is OK, but it's also on the same level of the hierarchy as the recommendations and the "Connect" part of iTunes, which can be accessed via a bunch of buttons at the top. The recommendations are in a separate tab, which is not where your music is, and there's also still Genius mixes for some reason and......

And if at this point I sound like an idiot who doesn't understand how the application works then you're partially right. I don't know how to use iTunes. If an experienced, tech-savvy user doesn't feel comfortable doing the most basic actions in your application after three months of regular use, as a developer you have failed colossally.

I'm not having fun

The UI is so obtuse that I'm not enjoying my time listening to music and discovering new songs. I've built a total of 5 playlists in iTunes. In contrast, I have more than 30 playlists in Spotify: some focused on specific activities like reading books or writing, and others focused on a specific genre or tone. I have fun playing around with Spotify and discovering new artists. With Apple Music I just felt sad; that's not what you want.

No "One more thing"

Because Apple are having to deal with iTunes, they can't focus on building on top of the platform. Coming back to Spotify I was delighted to find a new recommendation section on each playlist. Spotify now suggests songs you can add to your playlists based on what you've already put there. Within 5 minutes of being back on Spotify I was listening to new artists and songs. It was like a breath of fresh air.

There are also pre-built playlists for every mood and season as well as a bunch of running playlists that adapt to your running pace using the gyroscope on the phone. I had a great run on Friday because of Spotify.

iTunes made my life harder. Spotify makes my life better. It's as simple as that.

And so I have put an end to the Apple Music experiment and returned to the place where my music and I feel at home. It's good to be back, little green friend. I've missed you.

Have a good one. Until next time.

-- Jay Blanco

Brexit: A personal perspective

Edit: I've added some of my favourite photos from my time in the UK to highlight how great it is. Enjoy.

On the 23rd of June the UK public voted to leave the EU, the so-called Brexit.

I've been unsure whether to write on the topic. Until now, I didn't understand why the events of last week bothered me in a way that no other political issue had ever done.

Last week the people of the UK voted to end their membership in the European Union by a narrow margin of 51.9% to 48.1%.

Lost in the political and economic sabre-rattling that has been going on in the media are the personal stories of those who have already been affected by Brexit.

I lived in the UK for almost 9 years as a foreign student at Royal Holloway university near London1. As a non-EU student I'm familiar with the immigration process in the UK and have experienced the effects of an increasingly restrictive immigration policy2: interminable forms; high application costs; fingerprinting; six-month-long waits in limbo without a passport; and, most importantly, the feeling that the road was getting harder and harder to traverse. Each extension got more complicated and cumbersome, with requirements changing from year to year. The international student team at my university did their best to keep up with the changing regulations, but towards the end I got the sense it was starting to wear on them. Regardless, I went through each application because I knew that it was worth it. What I thought of as the UK was worth fighting for.

From 2006 to 2015 I was proud to call the UK my home. I've met wonderful people, some of whom became my best friends; I worked hard and got two degrees in physics; and was lucky enough to meet the person I love the most during my time at university.

I can say without a doubt that the person I am today is a direct result of my time there, surrounded by people from all cultures, with a rich history, and a general collective understanding that being nice and polite is for the benefit of all. I was privileged to work with students and staff from different corners of the globe, from Portugal to the US, Russia and Canada, and many other countries, all coming together to build on our understanding of the universe. To build a common understanding of the reality we live in, knowing that all that matters is your passion for the work you do and how you treat others. I'm better off today because I was part of that vibrant, multicultural, and accepting community.

It's because of those experiences that the results of the referendum bother me so much. Whether you are for Leave or Remain, you cannot ignore the outright racist undercurrent of the Leave campaign and a large part of its voters.

The EU referendum gave a chance and a voice for the worst parts of society to flourish. The racist rhetoric spewed on television by the leaders of the Leave campaign has emboldened and legitimized those who until now shared their hateful messages over a beer with their mates or on online forums.

We are already seeing the social effects of this referendum and the Leave campaign, with a series of isolated, but nonetheless serious attacks against immigrants and their communities. What was until now a simmering undercurrent has already bubbled into outright violence and hate crimes.

It appears that a part of the Leave camp feel that multiculturalism, immigration, and globalization have made the UK worse off. The Leave victory last week and the events that followed showed that the country I lived in wasn't as committed to diversity and inclusivity as I had believed. What I optimistically saw as a prejudiced and backwards fringe turned out to be a portion of the population that I cannot ignore.


A large part of the population said no to everything I represent and hold as important: rationality, acceptance, diversity, and multiculturalism. They said no to what my partner and I are as an international couple, British and Israeli. We considered getting married and moving back to the UK. Is that still an option? And more importantly, do we even want to at this point? Perhaps it's too soon to tell.

To be clear, not everyone in the Leave camp is a racist or a bigot. Many have serious concerns about the direction their country and their community are taking. They are concerned that their way of life is changing in ways they do not like or cannot control. I'm not an expert on this issue, so I will leave it to better minds to tackle.

There are millions of people who believe that the UK is better when citizens of different nationalities work together for the common good. People like my friends, my girlfriend's family, my university lecturers, and the thousands of other people I've had the pleasure of meeting while I was there. It's not all bad.

However, Leave did win. The campaign that depicted images of refugees as a plague and was headed by an out-and-out racist won by popular vote. When that happened, the picture that I held in my mind of my home for 9 years cracked just a little.

My niece asked me last week about the university entry requirements outside of Israel. I was genuinely happy at the prospect of her getting the chance to experience a country rich with history and culture, and with lovely people and places. To experience the UK that I got to experience.

But now I'm not so sure I can look her in the eye with that same pride as before and tell her she should go. And that is just a little sad.

Until next time. Have a great one.

-- Jay Blanco


  1. Well it's not really near London but it's close enough. Google maps is your friend.
  2. Just to be clear I lived in the UK with a Tier 4 visa which is defined entirely by the UK government and is unrelated to free movement facilitated by EU membership.