
Task Review Cheat Sheet

March 2, 2015

Maybe a short summary of task checks is more helpful than all of this previous conceptual rigor:

  • Is it clear?
  • Right granularity? Can you subdivide it?
  • Coherent action on valued projects?
  • Yours or someone else’s idea?
  • Outcome/Impact? Who realizes benefits and risks?
  • What kind of important?
  • What kind of urgent?
  • Serves which realm of value?
  • Affinity or fear?
  • Energy balance?
  • Delegate or outsource?
  • Synchronization points and deadlines?
  • Where is 80%?
  • Test for outcomes? Scheduled review?

Why do you need lists?

February 28, 2015

A few reasons to maintain lists

Re-allocate attention — you need lists so you neither forget things that are important to you nor spend all of your attention trying to remember things that are out of context.

Order for dependency — some things naturally precede others for contextual reasons. Put the decorations on the Christmas tree after the lights.

Prioritization — Prioritize according to your values, context and energy management needs.

Granularity — you need to work at the right scale to fit tasks into available openings in your day, maintain focus and sustain energy while minimizing switching costs.

Simplification — Some tasks or projects are complex and have many steps. Break them down and find simpler chunks of work to understand and attack.

Clarity — focus on one, well-defined thing at a time. Write about the more complex or important tasks to learn and think more clearly.

Synchronization — you may need the attention and energy of others, so you might need to plan for overlapping context and clear communication.

Coherent Action — you may need many iterations or successive applications of energy and attention to accomplish the objective. A good plan will make coherent action over time possible.

Iterate to Milestones — Make lists to keep yourself honest about getting to the next deliverable or checkpoint directly. Cut out the “fixing to do” and “might need this later” stuff and go directly for things that provide results and learning.

Task List Checks

Here are a few tools for thinking about your tasks and getting the level, focus and balance right:

A. On the scale of days, weeks, or months (hours are too short; years too long), are your energy flows in balance? Are you feeding and being fed equally? Is the vitality of your body matching the requirements for working, learning, and relationship building? If not, re-balance immediately. (How can you tell? Boredom, helplessness, listlessness, anger, depression, …)

B. Does the task serve the value realm you have assigned it to? What is the payoff? Who suffers and how, if the task is un-done?

C. What type of important is the task? You don’t necessarily drop anything because of the kind of important it is, but you might find you want to re-arrange some attention blocks to serve the less fear-driven types of important. Aim for some I5 and I6 every day.

I1 Co-dependent/Manipulation
I2 Demand/Threat
I3 Commitment/Practical output
I4 Core value/moral
I5 Aspiration/Desire
I6 Mission/Integrity

D. What kind of urgent?

U1 Negative consequences
U2 Disappointing others
U3 Less value due to changing context (studying for the test after taking it)
U4 Builds lasting value

(Okay, if you have to use Covey’s grid, at least make 24 boxes and color the higher numbers green and the lower numbers red. Then rate your tasks carefully and put them in the proper box. You will be more clear about what values you are serving, but that still won’t help you translate values rationally. So just throw your new, amazing grid away along with your first one.)
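If you do insist on the grid, the 24 boxes are a few lines of code. A playful Python sketch using the I1–I6 and U1–U4 labels from sections C and D (the green/red split at the score midpoint is my own arbitrary choice):

```python
# Build the 24-box importance x urgency grid described above.
# Higher combined scores get green, lower get red.
importance = ["I1", "I2", "I3", "I4", "I5", "I6"]
urgency = ["U1", "U2", "U3", "U4"]

def color(i, u):
    # Sum the numeric parts of the labels; scores range from 2 to 10.
    # The split at 6 (the midpoint) is an arbitrary illustrative choice.
    score = int(i[1:]) + int(u[1:])
    return "green" if score >= 6 else "red"

grid = {(i, u): color(i, u) for i in importance for u in urgency}
print(len(grid))                 # 24 boxes
print(grid[("I6", "U4")])        # -> green
print(grid[("I1", "U1")])        # -> red
```

Then rate your tasks, file them into boxes, and (per the advice above) throw the grid away.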

E. Granularity. You have an approximately power-law distribution of sizes of available time slots every day (that’s another blog post). You have many hours for sleep, a few for eating, meetings, and commuting, and a few moments for conversations, drinking water, etc. Make tasks every day to fit all of these different-sized time chunks. Remember, you are allocating attention, so get the sequencing and context right.

F. Can you delegate? Can you ask or pay someone to pay attention to something so you don’t have to? This is what you do any time you buy or rent something, so pay attention every time you get the “do it myself” feeling. These can be great and valuable experiences, but don’t do it out of habit.

G. Plan to pause short of the idealistic goal and let some time pass to assess where you are. (80/20 rule)

H. Test to see if a task pays back in the way you anticipated. Evaluate energy flows and outcomes of tasks a few weeks after they are complete to make sure you are doing what you think you are doing.

Why prioritization is harder than you think and a little help

February 26, 2015

The reason it is hard (and why Covey’s world is ridiculously simplistic) is that the rewards and risks for doing or delaying a thing are impossible to compare by objective rational means. We are always choosing where to put our attention based on fears, desires, affinities, delights, agreements, habits and discipline. Each of us has one stream of consciousness, but various activities reliably fall into different realms of value. There is no rational equation to determine whether playing Legos with your kids is more or less important or urgent than creating a new TPS report cover; these are incomparable realms of value. People who insist they are essentially comparable are pretending and want you to come along.

The trick of prioritization is an essentially personal trick. If you are going to compare the value of filling your car with gas and the desire to express gratitude to your mother by showing up on time to brunch on Mother’s Day, you need to build a personal bridge between the two realms of value. We do this every day. We sort out the perceived risks and fears along with anticipated results and make choices based on a personal sense of relative value. But these choices aren’t inherently rational, or the same between people, or even the same between yourself today and tomorrow.

People will tell you how you should do things to meet their expectations. If you listen too much, you might start to feel helpless–you might lose the realization that you chose where your attention goes. If you listen too little, people might think you are odd or antisocial. But all of the middle ground is yours.

Given this, is there any help for prioritizing attention?

  1. Use the GTD idea of realms of action to split tasks by physical context. This seems to make sense in that you don’t write email during your commute and you don’t rake the lawn at work. This is practical and good to think of tasks in terms of where they can best be accomplished based on context, tools, etc.
  2. Get clear on whether a task is driven straight from your own id, ego, love, loyalty, commitment and intellectual or relationship affinities, or if you are doing it to fulfill expectations, in exchange for something. Exchanges are okay, but sort them out. Is the exchange direct or indirect? (Co-dependent much?) Is the exchange equitable according to your personal system of comparing realms of value?
  3. Get honest about what you can control and what you can’t. Stop worrying about what you can’t control.
  4. Design for the likeliest outcomes, even when you feel you have to work for something less likely. You won’t beat the odds every time.
  5. Review your predictions, actions and outcomes and change your perspective, attention and habits where things don’t line up.
  6. Understand your value system and feelings about how things go.
  7. Lists and probably multiple lists (so use an automated system). More on this next post.

Manage Attention, Not Time

February 23, 2015

Stephen Covey says prioritization is easy.

In his world, we have demands on our time. These fall into two groups: urgent tasks and important tasks. You have to decide what to do next, so you split things up along these dimensions. Next, you plan in order to make time for important, non-urgent things, you do the urgent and important things now, and you simply never do the non-urgent, non-important things.

Simple and clear. But this simple model is simplistic. We rarely fill our to-do lists with unimportant, non-urgent tasks. Further, and more interesting, demands on our time don’t usually fall neatly along a single dimension of value.

Let’s back up a minute and see if we can identify one true thing to start us off. Maybe try this: it does seem true that you can do only one thing next. Don’t counter with multitasking–that’s only a statement about how long you will spend on the next thing. So can we be more careful to define the limited resource?

You might still be tempted to answer time, and it is true you have only one day’s worth of time in one day. But that hardly seems useful, or even relevant since it is inflexible and everyone has exactly the same time. In this real sense time is bounded and contextually relevant–but it is not really a resource. Contrary to idiomatic usage you don’t “find time” or “make time” or “lose time.” It can’t be bought or traded or bargained for. Time happens no matter what. And it happens to everyone equally, so you can’t gain advantages or fall behind by managing time.

However, you can direct your attention. And I propose, this is what you actually manage, this is what you plan to use when you create your schedule and prioritize tasks. This is what you lose when you waste time. You don’t manage time except abstractly, a layer or so away. You can only put your attention on something, or not.

When you feel like you get ahead by “making time” or “working efficiently” you are talking about feeling good about how you used your attention. You aren’t saying anything about time.

Second, you can choose your energetic investment. You might direct all of your resources to one thing, but spend only a limited amount on another.

I mean this broadly in terms of your physical activity, your talents, your money and even the attention of your friends. You may choose to direct a high flow of energy toward a very important task such as finding a lost child or a promotion at work. You may take a more leisurely attitude when cleaning the garage on Sunday afternoon.

Energy flow is a bit more subtle than attention because it flows in as well as out. Some activities are credits and some are debits. A 30-minute run three days a week can allow you to put more energy into concentration tasks, might help you sleep better, or keep your heart healthy, keeping your attention off hospitals and hypertension medications. On the other hand, an argument with your spouse might divert your attention or lower your energy for other projects for hours. Managing energy is a trick of flows–some activities energize while others drain. You have to figure this out for yourself.

What you manage when you have good time-management skills is your attention and energy.

One note about external demands here–no one can make you focus your attention or apply your energy to anything. We regularly choose to submit our attention and energy to others. Sometimes we feel like we have no choice. We feel a lot of pressure to satisfy the expectations of spouses, children, and bosses, but this is always a choice. So be careful when you are making bad exchanges–they are always your responsibility.

Manage time or manage attention–either way, you still have to prioritize. What have we learned about why prioritization is harder than we assume? That’s for the next post.

John Von Neumann

January 2, 2013

By chance, I have run across multiple references to John Von Neumann material over the last few weeks. Von Neumann’s was an astoundingly broad and vigorous intellect, and I have been intrigued by his life, creativity and contributions since first hearing about him. He is not one of the most famous 20th-century scientists, though he shows up in close proximity to nearly every major character and contribution you have heard about–computability, game theory, economics, quantum mechanics, the Manhattan Project… Amazing!

There is a 45-year-old documentary on YouTube that is fascinating for a number of reasons and gives a good overview of a few of Von Neumann’s contributions.

And don’t miss part 2 in which Paul Halmos says Johnny could have made a contribution if he had only applied himself…

Download (and read!) Von Neumann’s and Morgenstern’s classic work on game theory: Theory of Games and Economic Behavior.

Hat tips: Interesting post from Carson at Science Clearing House, MathJesus’ tweet of math history link on Von Neumann’s birthday.


Age visualization

November 11, 2012

At Visualized in NYC last week, one of the presenters (Sha Hwang) showed a visualization of his age.  I found this striking as a measure of one’s place in life and a lovely graphic as well. I decided to create a similar graphic for myself.

Age in months. The green line is the median life expectancy of 78 years.

My Processing code is available on Github in case you want to make your own.
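The underlying calculation is simple enough to sketch here in Python rather than Processing (the birthdate below is a made-up example, not mine):

```python
# Months lived so far versus the months in a 78-year median life
# expectancy -- the two quantities behind the age graphic.
from datetime import date

def months_between(start, end):
    """Whole calendar months from start to end."""
    return (end.year - start.year) * 12 + (end.month - start.month)

birth = date(1970, 6, 15)    # hypothetical birthdate
today = date(2012, 11, 11)   # date of this post
lived = months_between(birth, today)
total = 78 * 12              # median life expectancy, in months
print(lived, "of", total, "months")  # -> 509 of 936 months
```

Each month becomes one cell in the grid, filled up to `lived` and empty after.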


R, e.g.: Year-over-year comparisons with ggplot and facet_wrap

October 31, 2012

The following image appeared on the Gnip blog last week. It compares tweets containing “SXSW” since 2007. For comparing timing across the years it is useful to align the plots by year but let the y-scales float; otherwise it isn’t possible to see the features in the early years due to the growth of Twitter.

The tricks here are pretty straightforward:

  • Set scales="free_y" in facet_wrap
  • Create a dummy year, doy (the one below is for 2000, a year not shown)
  • Label each year with a dummy scale using scale_x_date with custom breaks
  • Give metric abbreviated y-labels (see format_si function)
  • Add space between the plots as the date labels can be misleading when the plots have standard spacing (use panel.margin)

While the x-labels are correct, I don’t really like how Jan 1 of the next year hangs off to the right of each plot, unlabeled.

#!/usr/bin/env Rscript
# Usage: ./yoy_plot.R <csv file> <plot title>

library(ggplot2)
library(scales)  # date_format
library(grid)    # unit

args <- commandArgs(trailingOnly = TRUE)

format_si <- function(...) {
  # Format a vector of numeric values according
  # to the International System of Units.
  # Based on code by Ben Tupper
  # Args:
  #   ...: Args passed to format()
  # Returns:
  #   A function to format a vector of strings using
  #   SI prefix notation
  # Usage:
  #   scale_y_continuous(labels=format_si())
  function(x) {
    limits <- c(1e-24, 1e-21, 1e-18, 1e-15, 1e-12,
                1e-9,  1e-6,  1e-3,  1e0,   1e3,
                1e6,   1e9,   1e12,  1e15,  1e18,
                1e21,  1e24)
    prefix <- c("y",   "z",   "a",   "f",   "p",
                "n",   "µ",   "m",   " ",   "k",
                "M",   "G",   "T",   "P",   "E",
                "Z",   "Y")

    # Vector with array indices according to position in intervals
    i <- findInterval(abs(x), limits)

    # Set prefix to " " for very small values < 1e-24
    i <- ifelse(i == 0, which(limits == 1e0), i)

    paste(format(round(x / limits[i], 1),
                 trim = TRUE, scientific = FALSE, ...),
          prefix[i])
  }
}

Y <- read.delim(args[1], sep = ",", header = TRUE)
Y$date <- as.POSIXct(Y$time)

# Full time series in a single panel
png(filename = paste(sep = "", args[1], ".png"),
    width = 550, height = 300, units = 'px')
print(
  ggplot(data = Y) +
    geom_line(aes(date, count), color = "#e56d25") +
    scale_y_continuous(labels = format_si()) +
    scale_x_datetime(limits = c(as.POSIXct("2007-01-01"),
                                as.POSIXct("2012-09-01"))) +
    xlab("Date") +
    ylab("Tweets per Day") +
    ggtitle(args[2]) +
    theme(legend.position = 'none',
          panel.background = element_rect(fill = "#545454"),
          panel.grid.major = element_line(colour = "#757575"),
          panel.grid.minor = element_line(colour = "#757575")))
dev.off()

# Year-over-year comparison with facet_wrap:
# simulate dates in a single year (2000 in this case),
# but give them year factors for the facets,
# and use custom axis formatting.

Y$Yr <- as.factor(as.POSIXlt(Y$time)$year + 1900)
Y$Mn <- as.factor(1 + as.POSIXlt(Y$time)$mon)
Y$Dy <- as.factor(as.POSIXlt(Y$time)$mday)
# use dates for easier plotting
Y <- transform(Y, doy = as.Date(paste("2000", Y$Mn, Y$Dy, sep = "/")))

png(filename = paste(sep = "", args[1], ".year.png"),
    width = 550, height = 800, units = 'px')
print(
  ggplot(data = Y) +
    geom_line(aes(doy, count), color = "#e56d25") +
    facet_wrap(~ Yr, ncol = 1, scales = "free_y") +
    scale_y_continuous(labels = format_si()) +
    scale_x_date(labels = date_format("%b"),
                 breaks = seq(min(Y$doy), max(Y$doy), "month")) +
    xlab("Date") + ylab("Tweets per Day") +
    labs(title = args[2]) +
    theme(legend.position = 'none',
          panel.margin = unit(1.5, 'line'),
          strip.text.x = element_text(size = 12, face = "bold"),
          panel.background = element_rect(fill = "#545454"),
          panel.grid.major = element_line(colour = "#757575"),
          panel.grid.minor = element_line(colour = "#757575")))
dev.off()

Decisions: data, bias and blame

October 28, 2012

This Strata (NY, 2012) talk caught my attention more than any other talk at the conference. Ms. Ravich made a request for developers to create better decision tools. (Did she mistake this group for a mythical Software Engineer/Game Theory conference?)

Ms. Ravich started with “I am not a big fan of the information revolution.” That’s a gutsy start given the crowd. But fortunately we were all drowsy and no one reacted. Technically, she was one of the best speakers–she spoke clearly and slowly, her argument was logically organized, she told a good story, and she used a powerful myth as a supporting metaphor for her point.

The form of the request was shaped by the idea of fast and slow thinking. Fast thinking at its best synthesizes and sorts quickly. You need fast thinking to sort out what to think slowly about. Then she delivered a couple of assertions. “I think strategic decision makers are in real danger of the information revolution swamping our ability to do fast thinking. And that’s the very attribute we need to do to make the hard policy choices.”

What does “information revolution” mean? Apparently it is a movement or -ism or evolution or situation that can change basic human psychology and erode the ability to do fast thinking. And what is the case for more fast thinking in policy making? Heuristics for decision making are so natural we barely realize we are using them. They are great because they are fast and we feel certain about them. They can also create huge liabilities when used to make decisions about long-term policy. That feeling of certainty is associated with confirmation bias, attention bias, willful framing naivete, unconscious anchoring biases, …

Ravich goes on to explain the assertions above with an example from the Bush (43) administration dealing with the challenges of nation building in Afghanistan. Afghanistan was growing most of the world’s opium poppies. I am sure this caused many economic, border, organized crime, and monetary problems. But Ms. Ravich’s explanation for why this was bad was that it offended our national pride. So, we decided to destroy the poppies. This did not endear us to the farmers, nor did it stop them from growing poppies.

Ravich explains that the poor process of making the decision was due to the inability of decision makers to “rack and stack the importance of each bit of information to see how it aligned with our goal.”

Following this explanation was the request: “If strategic decision makers in the situation room are going to win the information revolution, developers need a better insight into the thought process of how the policy decision makers reason and think, how we assemble and prioritize information.”

I am afraid I heard something a little like this… Look, we are good at making gut decisions. We can make them fast. We feel and act confidently about them. But you guys didn’t make the proper context for our heuristics and biases so they didn’t reflect reality. Do better next time.

On one hand, fair enough. That’s the job I signed up for. But it also seems there is room here for more responsible accounting for biases on the part of the decision makers. And that sometimes means wading through boring data and trying to understand something you don’t already understand.


Python JSON or C++ JSON Parsing

October 27, 2012

At Gnip, we parse about half a billion JSON activities from our firehoses of social media every day. Until recently, I believed that the time I would save parsing social activities with a C++ command-line tool would more than justify the additional time it takes to develop in C++. This turns out to be wrong.

Comparing the native JSON parser in Python 2.7 and the UltraJSON parser to a C++ implementation linked against jsoncpp indicates that UltraJSON is by far the best choice, achieving about twice the parsing rate of the C++ version for Gnip’s normalized JSON Activity Stream format. UltraJSON parsed Twitter activities at nearly 20 MB/second.
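The shape of the benchmark is simple; here is a minimal sketch of the kind of timing harness involved (the synthetic records below are my own stand-ins, not Gnip’s actual Activity Stream data; ujson is used when installed, with the stdlib parser as a fallback):

```python
# Time how fast a parser chews through newline-delimited JSON records.
import json
import time

try:
    import ujson as fast_json  # UltraJSON, if installed
except ImportError:
    fast_json = json           # fall back to the stdlib parser

def parse_rate(parser, lines):
    """Parse every line; return (count, elapsed seconds)."""
    start = time.time()
    docs = [parser.loads(line) for line in lines]
    return len(docs), time.time() - start

# Synthetic corpus of small activity-like records (stand-in data)
lines = [json.dumps({"id": i, "verb": "post", "body": "x" * 100})
         for i in range(10000)]

n, t = parse_rate(fast_json, lines)
print("parsed %d activities in %.3f s" % (n, t))
```

The real benchmark runs the same loop over files of increasing size for each parser.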


Plot of elapsed time to parse increasingly large JSON files.  (Lower numbers are better.)

Additional details, scripts, data and code are available on GitHub.

Dp-means: Optimizing to get the number of clusters

July 19, 2012

In my last post I compared dp-means and k-means error functions and run times.  John Myles White pointed to some opportunities that come from \lambda being a continuous variable.

Evolving the test code I posted on github, I developed a quick-and-dirty proof of concept.

First, below is the parameter vs. error graph in its latest incarnation.  There are two important changes from the analogous graph from last post:

  • Instead of using the k-means cost function for both the timing and error comparisons as I did before, I am now plotting the traditional k-means cost function for k-means and the cost function for dp-means,

\text{Cost(K-means)} + \lambda k

  • I am no longer plotting against \text{data range}/\lambda for comparison
  • I am plotting errors for a data set not used in training (called cross-validation in the code).

The cost function for dp-means shows a clear minimum. This graph is slightly confusing because the parameter for k-means, k, the number of clusters, increases left-to-right, while the number of clusters in dp-means goes down with increasing parameter \lambda.

I wrote a small script that leverages SciPy to optimize the dp-means cost function in order to determine the optimal value of \lambda, and therefore the number of clusters.
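For readers who want to poke at this without the repository, here is a toy Python/numpy sketch of the dp-means pass itself (after Kulis and Jordan’s algorithm; it is not the posted code, and the two-blob data is made up):

```python
import numpy as np

def dp_means(X, lam, max_iter=100):
    """Toy dp-means: a point whose squared distance to every existing
    center exceeds lam spawns a new cluster. Returns (k, cost)."""
    centers = X.mean(axis=0, keepdims=True)  # start with one center
    for _ in range(max_iter):
        # Assignment pass, spawning new centers as needed
        assign = []
        for x in X:
            d2 = ((centers - x) ** 2).sum(axis=1)
            if d2.min() > lam:
                centers = np.vstack([centers, x])
                assign.append(len(centers) - 1)
            else:
                assign.append(int(d2.argmin()))
        assign = np.array(assign)
        # Update pass: recompute non-empty centers
        new_centers = np.vstack([X[assign == j].mean(axis=0)
                                 for j in np.unique(assign)])
        if new_centers.shape == centers.shape and np.allclose(new_centers, centers):
            break
        centers = new_centers
    labels = np.unique(assign)
    # dp-means cost: within-cluster SSE plus lambda * k
    cost = sum(((X[assign == j] - X[assign == j].mean(axis=0)) ** 2).sum()
               for j in labels) + lam * len(labels)
    return len(labels), cost

# Two well-separated 2-D blobs; lambda sits between the within-blob
# and between-blob squared distances, so two clusters should emerge.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
k, cost = dp_means(X, lam=8.0)
print("clusters:", k)  # -> clusters: 2
```

Feeding this cost as a function of \lambda to an optimizer is then a one-liner, which is the role the SciPy script plays above.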

Here is an example on one of the data sets included in the example “input” directory. This code runs slowly, but converges to a minimum at,

lambda: 5.488
with error: 14.2624

Here is a sample training run at the optimal value with only the data as input (the code determines everything it needs from the data).

Figure: training iterations showing the centers, training data membership, and cross-validation data membership.

The code is rough and inefficient, but the method seems robust enough to proceed to work on smoothing things out and run more tests. Neat.
