Realpolitik TV
Dr. Drang is tired of Donald Trump's bullshit. Which is essentially just an amen to Trump's preaching.
Trump isn't the traditional politician, but rather more of a singer-songwriter in his own indie sub-genre of republicanism.
I think it's very likely that Trump will be written into history as a chief catalyst in the unapologetic push to fully merge politics with entertainment.
Politicians have always been actors fundamentally. The ability to play a charismatic character on stage is an asset. Politicians with acting backgrounds have used their skills to great advantage. Ronald Reagan and Arnold Schwarzenegger are notable examples.
Nerds don't get politics because most of us are introverted and probably find acting in a non-Hollywood capacity disingenuous. Which it is. But it's the way things get done in a time when the competition for American attention has never been more intense. Entertainment works.
We implicitly accept that actors are essentially lying to us on screen for the sake of entertainment. If we can accept that the political actor is simply acting (lying for the sake of capturing attention), then it seems logical to allow it to merge with the entertainment sphere.
And entertainment has been borrowing from politics for years. Stephen Colbert effectively played the part of a Republican on The Colbert Report. If Donald Trump can bridge the gap from capitalist imperialist to actor by way of The Apprentice, who's to say he's not qualified for a public office acting gig?
Up next after a few words from campaign donors: Obama on Running Wild with Bear Grylls.
Wolfram Alpha + LaTeX revisited
I mentioned before how useful it can be to evaluate numerical LaTeX expressions with Wolfram Alpha using Alfred. I still do that a lot, but now that I have a Wolfram Alpha pro subscription, I'm spending more time on the WA website itself to take advantage of features like calculation history.
Another benefit of being on the website is seeing exactly how WA interprets the code I enter. This is a fantastic way to verify that I entered the code I thought I entered—especially for longer expressions that extend beyond the visible boundary of the input field.
A recent example: I wanted to evaluate:
\frac{1677 - 1251.76}{1 + \frac{(1-0.004)}{1.04} + \frac{(1-0.004)(1-0.005)}{1.04^{2}} + \frac{(1-0.004)(1-0.005)(1-0.006)}{1.04^{3}} }
The first time I copied this expression into WA, I accidentally missed the very last closing brace, probably because there was some extra space in front of it.
<img src="/img/img.png" alt=""/>
I was able to quickly see that something was wrong since I intended the `1677 - 1251.76` term to be in the numerator.
So I tried again, making sure to capture all of the code, and got what I was looking for, including the numerical value of the expression, 113.41.
<img src="/img/img.png" alt=""/>
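As a cross-check outside Wolfram Alpha (not part of the workflow above, just a verification), the same expression is easy to evaluate in a few lines of Python:

```python
# Evaluate the LaTeX expression above numerically as a sanity check.
num = 1677 - 1251.76
den = (1
       + (1 - 0.004) / 1.04
       + (1 - 0.004) * (1 - 0.005) / 1.04**2
       + (1 - 0.004) * (1 - 0.005) * (1 - 0.006) / 1.04**3)
print(round(num / den, 2))  # 113.41
```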
Being able to evaluate these expressions on the fly is a godsend and greatly reduces the chances for typos in the final document. As an added bonus, since WA generates a unique URL for each query, I can copy it into my `.tex` file as a comment for later reference. For example, here's the calculation above.
Office 2016 for Mac impressions
I finally loaded Office 2016 on my Mac. Here are my thoughts.
Pros

- Significantly more fluid and Mac-like scrolling
- Very Windows-y but significantly more pleasing interface than Office 2011—especially on a retina display
- Interface, button locations, etc. are more consistent with the Windows version of Office
- Built-in keyboard shortcuts are now consistent with the Windows version (e.g. `F2` will finally edit a cell in Excel, and `F4` will finally put `$`s in formulas)
Cons

- A more stripped-down VBA editor makes any kind of serious macro editing difficult to impossible. Microsoft now recommends developing all macros in the Windows version of Office, but they seem to be hinting that a more web-based solution for macro development is coming.
- No ability to create custom keyboard shortcuts of any kind (big bummer)
- No ability to customize the Quick Access Toolbar (the improved ribbon layout makes this less of a con)
- No ability to buy the thing outright (yet)—you can only subscribe for $6.99/month through Office 365. Supposedly a one-time purchase price is coming in September 2015.
- Super talkative to Microsoft's servers. I've never seen an application try to hit so many URLs through so many ports via Little Snitch. This seems consistent with Microsoft's apparent know-everything-the-user-is-doing posture in Windows 10. Office 2016 is essentially a giant wants-to-be-connected web app that runs locally on your Mac.
- Speaking of giant, it really is. Office 2016 consumes over 6 GB of space in my Applications folder, whereas Office 2011 took up a little over 1 GB.
In summary, Office 2016 for Mac feels like a significant visual update, which makes the experience of using it more consistent with other Mac applications—significantly more so than previous versions of Office for Mac. If you aren't a power user of VBA, macros, etc., it's probably a sensible upgrade. If you don't want Office 2016 to be in constant communication with Redmond, install Little Snitch.
Defer projects, not tasks
Lots of good, practical, salt-of-the-internet advice from Brett Kelly on using OmniFocus. My favorite is "defer projects, not tasks." On some level of consciousness, I figured this one out a while back, and it's made my OmniFocus perspectives several orders of magnitude more sane.
I don't enter a specific date to defer entire projects. I simply set their status to "on hold," which keeps a placeholder for them in my project list but keeps their individual tasks from competing with active projects for my attention.
I think deferring projects also promotes a more project-oriented mindset. That is, it helps funnel tasks into goal-oriented buckets. If it's not obvious which project a task belongs to, there's an excellent chance that the task 1) isn't worth your time or 2) belongs in a more calendar-like medium like Reminders.app.
By the way, Brett just released a free OmniFocus book, too.
We ship with pretty decent video software
Dr. Brian May explains how we're able to see depth in images of Pluto, despite the fact that New Horizons only has one camera lens.
It's based on a simple Photoshop hack that our brains perfected long ago—in real time, no less. Our brain can natively import two HD video streams, stitch them into one, then export the result as a single live feed of the world around us. Continuously, without any buffering.
I guess we've managed to do more with our stardust than Pluto has.
"Work"
Modern work requires convincing the mind that work happens in this virtual box, but not in this other virtual box appearing on the same physical surface.
We killed leisure
We think we have less time than ever, but this is only an illusion. The Economist:
Ever since a clock was first used to synchronise labour in the 18th century, time has been understood in relation to money. Once hours are financially quantified, people worry more about wasting, saving or using them profitably. When economies grow and incomes rise, everyone’s time becomes more valuable. And the more valuable something becomes, the scarcer it is.
In some ways we're living in the mushroom cloud of a productivity time bomb that was first wired by the Protestant work ethic. It just couldn't go nuclear until there was enough technology to mostly replace physical work with knowledge work.
Taken to its ultimate conclusion, if technology commoditizes all but human judgement (the purest of knowledge work products), the perceived value of time in a capitalist culture will approach infinity. In other words, our own attention will be the only value left to be added over technology.
Rather than increasing leisure time, our technological innovations may enslave us to our own inverted perceptions of value—paradoxically leading us to a state of total time poverty. The more we can do with any minute of the day, the more prohibitively expensive leisure time will become.
I want to say more. Way more. I just don't have enough time. I mean, it's not like you're paying me to write this.
Systematic
Brett and I rambled nostalgic about Notational Velocity forks, ran barefoot across the socio-political-economic mine field of U.S. health insurance, and discussed the power usage of mind maps in an almost perverse way. I'm of course referring to episode 143 of Systematic, which I had fun being a part of.
How Humans Save
I've always felt like the most interesting aspect of investing and finance is the wealth of data it generates on human behavior. Securities markets and investor sentiment shed a lot of light on behavioral patterns long before the internet, social media, and Big Data.
One of the most interesting facets of personal finance is the savings decision—the cognitive exercise of time-shifting wealth and income. As human longevity increases, this has only become more fascinating and rife with logical error.
Vanguard released a lengthy report, "How Americans Save," describing recent trends in the behavior of defined contribution plan participants (e.g. people with 401(k)s). The data in the report highlight one of the most important findings (in my opinion) in the field of behavioral economics: the tendency to rely on default choices.
From page 22 of Vanguard's report:
Faced with a complex choice and unsure what to do, many individuals often take the default or “no decision” choice. In the case of a voluntary savings plan, which requires that a participant take action in order to sign up, the “no decision” choice is a decision not to contribute to the plan.
The way most plans mitigate this error in human judgment is to make the decision for participants:
With an autopilot design, individuals are automatically enrolled into the plan, their deferral rates are automatically increased each year, and their contributions are automatically invested in a balanced investment strategy. Under an autopilot plan, the decision to save is framed negatively: “Quit the plan if you like.” In such a design, “doing nothing” leads to participation in the plan and investment of assets in a long-term retirement portfolio.
These are powerful implications if you think about it: scale it up to millions of Americans and billions of dollars in retirement accounts. And consider the fact that when companies opt employees into savings plans, they're also setting a default savings rate:
High-level metrics of participant savings behavior remained steady in 2014. The plan participation rate was 77% in 2014. The average deferral rate was 6.9% and the median was unchanged at 6.0%. However, average deferral rates have declined slightly from their peak of 7.3% in 2007. The decline in average contribution rates is attributable to increased adoption of automatic enrollment. While automatic enrollment increases participation rates, it also leads to lower contribution rates when default deferral rates are set at low levels, such as 3% or lower. (p. 4)
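To put the default deferral rate in perspective, here's a hypothetical compounding sketch (my numbers, not Vanguard's: a $60,000 salary, a 5% annual return, no raises or matching):

```python
# Balance after 30 years of contributions at a 3% vs. 7% deferral rate.
salary, years, r = 60_000, 30, 0.05

def balance(deferral_rate: float) -> float:
    contrib = salary * deferral_rate
    # Future value of an annual end-of-year contribution stream.
    return contrib * (((1 + r) ** years - 1) / r)

print(f"3% default: ${balance(0.03):,.0f}")
print(f"7% default: ${balance(0.07):,.0f}")
```

Under these assumptions, the 3% default leaves roughly $120,000 at retirement versus roughly $279,000 at 7%—a gap created entirely by a number someone else picked.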
Key takeaway: if you have a 401(k), the current balance is more likely a function of explicit or implicit decisions someone else made for you rather than decisions you made yourself. It's worth spending a little time pondering the degree to which you're allowing someone else to plan your future. Don't be too human if you can help it.
via Reddit
Strangers in a strange time
I think project planning, money management, diet, and exercise are all a lot alike. Strategies for doing any of them well are mostly common sense.
Everyone knows that eating fewer calories and exercising more is better than not. Everyone knows they should save more for the future. Everyone knows they should just do the things on their task list instead of doing something they'd rather be doing in the moment.
Everyone—everyone—knows all of these things. But everyone—everyone—is only good at doing these things for short sprints of time before their irrational sub-minds mutiny (again). And again.
Why are we so fucking stupid?
We're not really. We've just been grading ourselves using the wrong standardized test metrics. "Common sense" approaches falsely assume that people are closer to clockwork than orange. We aren't machines. We're mostly emotional artifacts of a past when the future was so improbable that it didn't make a hell of a lot of sense to waste time planning for it.
Though we are the supposedly self-aware species on the planet, we still have a long way to go before we really figure ourselves out. Fortunately the field of behavioral economics is putting us on a better course.
One of the most interesting results I've seen in the last few years came out of a study led by Hal Hershfield. He found that people make better—and more committed—decisions about retirement planning if they are shown hypothetically aged images of themselves. He also found that when we think about our "future selves," our brain activity is essentially the same as when we think about other people. So this "aged self" hack was a way of making the mental image of our future self more personal to us.
Of all the write-ups on Hershfield's findings, I like Alisa Opar's the best:
It turns out that we see our future selves as strangers. Though we will inevitably share their fates, the people we will become in a decade, quarter century, or more, are unknown to us. This impedes our ability to make good choices on their—which of course is our own—behalf. That bright, shiny New Year’s resolution? If you feel perfectly justified in breaking it, it may be because it feels like it was a promise someone else made.
Even though Hershfield's study was done specifically in the context of financial planning, I don't think it's that much of a stretch to hypothesize that this same sort of logical fallacy plagues project planning. To me, it's a very rational explanation for the irrational self-abuse we impose by giving our future selves insanely numerous and complex instructions via task management systems.
If it's human nature to feel better about dumping crap on someone else, there's little guessing left as to why so many things we plan for ourselves never happen.
By the way, if you're interested in more conversation about the future self problem, listen to David McRaney interview Elizabeth Dunn on the You Are Not So Smart podcast.
Smart Title Case in Sublime Text
Dr. Drang's recent post on how to title-case text in Drafts reminded me of one of my most-used Sublime Text packages. Matt Stevens's sublime-titlecase adds a `Smart Title Case` menu command that converts text of any case to title case. It's powered by a Python script derived from John Gruber's original Title Case Perl script.
I probably use this command at least a hundred times a week because it works so flawlessly to convert text into a consistent title case format.
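If you want the same behavior outside Sublime Text, one option (not mentioned above, but another Python port of Gruber's script) is the `titlecase` package on PyPI:

```python
# pip install titlecase
from titlecase import titlecase

print(titlecase("a tale of two cities"))  # A Tale of Two Cities
print(titlecase("the iPhone review"))     # The iPhone Review (mixed case left alone)
```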
One common use: I often sketch out a list of headings in a LaTeX document before filling them in. No matter how I get the headings in—by voice, copy/paste, or just speed typing—I don't have to worry about their case until they're all in. Then, using Sublime Text's multiple cursors and a keyboard shortcut I mapped to `Smart Title Case`, I can convert every heading to title case at once.
How do I love Sublime Text's multiple cursors?
Let me count the ways. Actually, there are way too many reasons to count. I love using multiple cursors in Sublime Text, especially for writing LaTeX. Just one example: quickly counting columns in a table (or quickly counting anything I've selected).
<img src="/img/img.png" alt=""/>
So without using any brain power at all, I know I have 9 total columns, 8 of which are right-aligned. Make tables all day, and see if this isn't helpful.
Duck and search
Like a lot of other people apparently, I'm using DuckDuckGo more and more. It's not so much that I'm boycotting Google over privacy concerns. I just like supporting different business models for doing things. Diversity benefits technology as much as it does biology and society.
On my Mac, probably nine out of ten searches go through Alfred, so switching from Google to DuckDuckGo is as simple as typing `duck` instead of `goo` into Alfred, which stocks keywords for both.
<img src="/img/img.png" alt=""/>
On my iPhone, I mostly use Launch Center Pro for quick searches. Adding DuckDuckGo is as simple as making a new action with DuckDuckGo's URL:
`http://duckduckgo.com/?q=[prompt]`
<img src="/img/img.png" alt=""/>
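For the curious, the `[prompt]` placeholder simply gets URL-encoded into the query string. A minimal sketch of the same thing in Python (my illustration, not something Launch Center Pro requires):

```python
# Build the DuckDuckGo search URL that the [prompt] placeholder produces.
from urllib.parse import quote_plus

def ddg_url(query: str) -> str:
    return "http://duckduckgo.com/?q=" + quote_plus(query)

print(ddg_url("sublime text multiple cursors"))
# http://duckduckgo.com/?q=sublime+text+multiple+cursors
```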
Time-caring theory
All projects yield a pair of quantifiable results:

- Completion or not
- Time spent on the project
Projects are fundamentally forecasts. They are best guesses of future events leading to the hopeful completion of some goal. Like any forecast of future events, projects are subject to two broad sources of inaccuracy:

- Human error and bias
- External random events
If people were perfectly rational beings who incorporated all available information, including their own past mistakes, to plan projects, I would expect the distribution of time spent on projects to be roughly symmetric about the estimate.
In other words, there would be equal numbers of projects completed ahead of and behind schedule. The actual completion time would mostly be a function of external random events beyond the project planner's control.
I think that on some subconscious level, we imagine the future success of a project in terms of a symmetric normal curve. We believe our best estimate of the completion time is really good—even when past experience is heavily skewed toward overruns.
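A hypothetical simulation (my parameters, assuming numpy and matplotlib; not the post's original charts) makes the contrast easy to see:

```python
# Contrast the completion-time distribution we imagine (symmetric)
# with the one experience tends to deliver (right-skewed).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
imagined = rng.normal(loc=10, scale=1.5, size=100_000)            # days
actual = rng.lognormal(mean=np.log(10), sigma=0.5, size=100_000)  # days

plt.hist(imagined, bins=200, density=True, alpha=0.5, label="imagined (symmetric)")
plt.hist(actual, bins=200, density=True, alpha=0.5, label="experienced (right-skewed)")
plt.xlabel("time to complete (days)")
plt.ylabel("density")
plt.legend()
plt.show()
```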
We don't really need any formal theory here. A little intuition is all it takes to see that the uncertainty of a project's completion time is a function of the number (and uncertainty) of all of the project's sub-projects.
Therefore, typical projects aren't just forecasts of the future; they're a series of nested forecasts of the future. As each sub-project's actual completion time comes in over its estimated completion time, the total project's completion time skews ever farther to the right.
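To make the nesting concrete, here's a minimal simulation (my assumptions, not the post's: 10 sub-projects, each estimated at 5 days, each actually right-skewed around its estimate):

```python
# Sum of right-skewed sub-project times vs. the naive sum of estimates.
import numpy as np

rng = np.random.default_rng(1)
n_subprojects, estimate_each = 10, 5
naive_total = n_subprojects * estimate_each  # 50 days

# Each sub-project's actual time: lognormal with its median at the estimate.
actuals = rng.lognormal(np.log(estimate_each), 0.4, (100_000, n_subprojects))
totals = actuals.sum(axis=1)

print(f"naive estimate: {naive_total} days")
print(f"median actual : {np.median(totals):.1f} days")
print(f"P(overrun)    : {(totals > naive_total).mean():.0%}")
```

Even though each sub-project's median time equals its estimate, the total overruns the naive estimate most of the time—skew compounds as forecasts nest.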
When confronting our behavioral patterns in this way, it would seem like this should be an easy problem to solve: we just need to be more realistic with time estimates.
It's not that easy.
We are finally getting smart enough to know we're kinda dumb
Unfortunately, knowing that we should be better at estimating time and actually doing it are vastly different neurological notions. When it comes to accurately planning projects, we are swimming upstream against powerful psychological currents. Our brains weren't designed to map the future accurately beyond a few time steps.
These behavioral biases have been well documented under the so-called planning fallacy. Our minds evolved to have an overconfidence bias and hyperbolically discount the benefits and costs of far-off events. We're able to do a decent job of forecasting our immediate future, then we implicitly hope that only roses grow beyond our field of view. We block past experience from our mind, and repeat the same mistakes over and over and over again.
To be clear, I'm not talking about stupid people. Just people. These are fundamental features of the human mind. We are poorly adapted to do the kind of project planning that modern work requires. Our priorities dramatically and inconsistently change with time, and this is exacerbated by a modern world where all of our basic needs are met, leaving us to create and manage extremely abstract priorities to deliver products and services that are mostly exchanged in our minds.
Simply put, projects are complicated. Far more complicated than we give them credit for. We can't expedite human evolution, so let's bring projects back to us. Let's dumb them down for the emotionally handicapped creatures that we are.
One idea that makes sense to me: let's look at the characteristics of successfully completed projects and see if we can recreate those conditions for other projects. It's easier than it sounds.
First, care. Then, schedule.
Once upon a time, Merlin Mann said "First, care. … Because, in the absence of caring, you'll never focus on anything more than your lack of focus."
No one can objectively argue against this caring principle. The catch: caring is entirely made of human emotion. At one extreme, caring is the arousal induced by productivity porn—you know it when you feel it, but you can't measure it. At least not like time. Caring is a very different substance than time, which in many ways is the antithesis of emotion. But both time and caring must be present to complete a project—any project.
In words:

- Projects are more likely to get done if there is time to do them
- Projects are more likely to get done if you care to do them
- Caring has a stronger effect than time
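In rough symbols (my notation, not from the original post), with T for time available and C for caring:

P(\text{done}) = f(T, C), \qquad \frac{\partial f}{\partial T} > 0, \qquad \frac{\partial f}{\partial C} > 0, \qquad \frac{\partial f}{\partial C} > \frac{\partial f}{\partial T}

Both partials are positive—more time and more caring each raise the odds—but the caring partial dominates.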
Further, time is independent of caring, but caring can be negatively correlated with time available. If time is a river, we're more likely to care to work harder when the precipice of a waterfall is in sight. In other words, deadlines.
Caring is a function of much more than time available. Any happy or sad stimulus that motivates you to do something is a source of caring—from the pleasure of reading your Twitter feed to the cold steel of a metaphorical corporate revolver against your temple as you write a TPS report.
An economist might call caring the "utility" of doing a project. It's the sense of satisfaction you get just from carrying the project out. Which is actually the secret. Projects are ostensibly about reaching future goals, but what we really experience in total are the emotions of each project's steps. That's where we spend all of our time anyway.
If you don't care about what you're doing, the amount of available time doesn't matter—much less any effort you spend on project planning, contextualizing, and pseudo-prioritizing project steps.
I bet most people could get incredible things done with only a notebook and a calendar. This is fundamentally the direction my personal project management systems have been heading in the past year, even though the notebook part isn't explainable in a few lines. Maybe I'll do that later, if I care to.
Clockwise
I'm a utility guy. Wait, who says that? Not normal people. I know; I'm not normal—well maybe normal enough to know that nobody cares more about your Apple Watch face than you do. But that's OK. You have to look at it the most. You should care.
It's been amusing to me just how much attention otherwise smart people have spent resolving which, if any, of the clock faces offered in pre-2.0 watchOS is best.
The fact that this is even a discussion at all is an interesting moment in the history of timepieces. Now that smart watches are a thing, "digital" and "analog" are user interface choices, not immutable consequences of hardware design.
It's not that I have anything against a modular or digital rendering of time. I just find analog clock faces—even animations of them—more pleasing to look at. In some ways, they're more practical, too. When it comes to simply being aware of time, digital clocks offer unnecessary precision.
I think that most natural human tasks can be comfortably dimensioned in 30–60-minute segments. Simply seeing how far the minute hand is from the top or bottom of the hour is usually enough for me—without even consciously processing the exact number of minutes. I think this is just more natural for the human mind, whose current form evolved long before the dawn of timepieces.
Our natural sense of time is tied to progression along predictable left-to-right paths: words across a page, days across a week, clock hands across the periphery of a clock face. In fact, clockwise motion describes the most predictable pattern observable to early humans, who were themselves hands on a clock. As Donn Lathrop explains:
Clockwise and counter-clockwise as we now know them seem to have derived from an accident of—as the real estate dealer said—location, location, location. In the Northern Hemisphere (in what is now Iraq), where the cradle of our civilization was rocked and the first written records were kept some 4,000 years ago, the early thinkers and teachers noted that their own shadows moved from left to right, as does the shadow of a stick or a sundial gnomon move from left to right during the course of the sun across the heavens.
So while analog clock faces on a smart watch are skeuomorphs of mechanical analog watches, so too were mechanical analog watches skeuomorphs of sundials.
The analog clock face captures the physical manifestation of time in a way that a digital clock cannot.
I think a circular clock face also says something about the senseless symmetry of time within the scope of our lives: a well-worn path. An infinite loop. Every day, we make the same circular trip together—at the same rate.
The clock faces we carry aren't just little models of sundials; they tell our entire history in the universe. For all our worldly accomplishments, we're still strapped to the tiny blue tip of a year hand, swinging around a giant nuclear time bomb.