Tech Blog

The technology team behind Think Through Math

Manda Brown

Meet the Team: Manda Brown

Hi! I’m Manda Brown, a software developer at Think Through Math, where I enjoy working with an amazing team to build great software. Although I love my job, occasionally everyone needs a change of pace, and I’m just returning to the office after a week-long vacation.

Last summer, my then 14-year-old daughter got a taste of bike camping over a weekend, and enjoyed it so much that she convinced us to make the trek from our hometown of Pittsburgh, PA to Washington, DC. We took six days to do it, traveling 335 miles on the Great Allegheny Passage and the C&O Canal Towpath, bikes heavy with gear, food, and water. It was an amazing adventure! I’m so happy to have done it and to have been able to give that experience to my daughter. I’m also happy to be back home again, where I have a shower and a laptop.

Pedaling along the trail for hours a day leaves a lot of time for observation and contemplation. Beautiful forests sheltered the trail; it wound beside and around sinuous rivers, threaded through occasional towns, and eventually led us to the capital. Math was all around us. The setting often inspired thoughts about things like the Fibonacci numbers apparent in the natural growth of the plants, the diversity and population cycles of the creatures we glimpsed as we passed (or swatted when they took a bite), the complex and beautiful math that describes the swirling currents of water, or the stresses and forces at work to create and expose the varied rock formations sporadically embellishing the scenery. And the evidence of math informing human activity was everywhere, from the canal and towpath itself, to the railroads, to architecture ranging from the simple and functional to the ornate and grandiose.

We used math a lot too. There were very important questions to answer: where will we stop for snacks? Do we need to stop in town and acquire more calories and deliciousness? Should we refill our water now, or can we wait until the next opportunity? How long should we expect to be on the bike each day? Will we beat the thunderstorm edging ominously closer on the radar? (We didn’t.) We were constantly exercising simple math involving time and distance to help us manage our resources and energy.
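That day-to-day trail arithmetic looks something like this (the mileage and day count are from the trip; the pace is an assumed figure for illustration, not our actual speed):

```ruby
# Trip totals from the post; the pace is a hypothetical touring speed on loaded bikes.
total_miles = 335.0
days = 6
miles_per_day = total_miles / days                # about 56 miles each day

assumed_pace_mph = 10.0                           # assumed average pace, fully loaded
hours_on_bike = miles_per_day / assumed_pace_mph  # roughly 5.6 hours of pedaling a day
```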

But my favorite trail math is even simpler: subtracting one. My daughter and I wouldn’t really consider ourselves athletes; we like to ride our bikes, but this was by far the most challenging ride we’ve ever undertaken and it wasn’t always easy. Each mile marker was a small victory. And they added up to larger minus ones. One mile at a time until we got to the next stopping point. One stretch and water break at a time until we got to the next meal. One meal at a time until we got through the next day (at day 5, we were pretty happy with our progress; but stayed at a hostel with a bunch of amazing Appalachian Trail through-hikers whose time units were months!). One day at a time until we finished one trail, and then the other. And finally one challenge complete, the adventure over.

The best thing about that last one is that the minuend of the expression is infinite. So what’s next?

Keith Weightman

From “SYS 2064” to “Git Push Origin Master”

HAHAHA!  Look at that couch!

Howdy! My name is Keith Weightman. I’m a Senior Software Developer (and Man of Mystery) here at Think Through Math. I have been programming pretty much since I was around 12.

Dig that C64!

Well, I mostly played games and only dabbled in code. Things got serious when I double majored in math and computers at UPJ, which, it turns out, is a great fit for working here at TTM!

So what does it mean to be a “senior” developer? Well, basically: I have experience with many programming languages, many databases, and many development processes (I even go back to the waterfall model). I have seen many scenarios and design methods that work and don’t work, and can advise accordingly going forward.

At TTM, being senior can sometimes just mean older! Back in 2012 our CTO Jim Wrubel chose to take our team, and company, in a different technical direction. The conversation went something like this:

Jim: “So Keith, you know how we hired you 6 weeks ago to do .NET and SQL Server?”

Keith: “Ah, yeah?”

Jim: “Well we decided to take a different technical path. We are going to convert everything to a Ruby on Rails app with a PostgreSQL back end. You in?”

Keith: (after wondering what Ruby on Rails was) “Sure!”

Jim’s wise decision meant many changes and challenges for TTM and myself:

  • TTM closely follows a ROWE management strategy. This is based on results, not necessarily focusing on how you get there. You can have more flexible work hours, but you also sometimes need to work at non-typical times and places. This is rare in Pittsburgh, and I am blessed to be a part of it, as I am divorced with 2 children and have a complicated non-standard home life. All my previous positions were a standard east coast 9-5 type of day. Time for me to adapt.

  • I’m more comfortable with statically typed languages, where the compiler enforces correctness. Ruby is dynamically typed, with little compile-time checking, and Rails components are loosely coupled, knowing little or nothing of one another. Time for me to grow.

  • The Ruby community strives and excels as an Open Source community. It’s a simple effective mindset - “Share with others what you love to do”. What’s this Open Source thing you say?

I came from the opposite mindset (Department of Defense, Seagate) where they want you to hide and secure your great work so no one can steal it. This is the most challenging change for me, but guess what? Time for me to change.

  • I didn’t know Ruby! We hired many incredibly awesome developers who are familiar with Ruby and the Open Source style. So I’m the old dog now learning new tricks from these young spring chickens! Fortunately, I don’t have much pride and am happy to learn from them. Time to learn!

It’s been a year or so and I’m still changing, and a little more “senior” than I was when Jim and I had our talk. As you will find out if you take a career in software development, things are always advancing. Being “senior” means you are always growing, always adapting, always learning, and often changing!

Meet the Team: Joel McCracken

Hi! I’m Joel McCracken, a software developer at Think Through Math. We developers have been trying to introduce ourselves so that people outside of our team can get to know us. I have worked on a number of features in the Think Through Math application, including the equation builder and the calculator.

I love programming. I think it is one of the most rewarding and intellectually satisfying pursuits that exist! In fact, next to reading, it is the most important thing that I have ever learned to do.

A little while back, I was part of a conversation with our CTO, James Wrubel. James mentioned that he didn’t realize at first how much programmers care about mathematics education, and that the mission at TTM would make it easier to hire programmers to work on our system.

This conversation got me to thinking: Why do I care so much about math and education? I don’t think the connection between math and programming is obvious. Often, we do not need to use much math in our day-to-day work as software developers. At a glance, it would seem that programming and math do not have all that much in common.

On the other hand, computer science is considered a sub-discipline of mathematics. A computer science education typically requires classes in some advanced mathematics. And computers were developed in order to help humans perform math faster. So it would seem that mathematics is somehow central to what a programmer does. How do we reconcile the fact that we rarely need advanced math in our day-to-day work with the fact that math is fundamental to programming?

Here is my answer, in two parts. Back in school, I enjoyed some parts of my math classes. Math helped me to understand parts of the world. I liked learning how things worked and related to one another, and how we can use math to figure things out. Many of the problems we had to do were a certain type we called “word problems”. These problems presented a scenario, and we had to figure out how to apply what we were learning to that scenario.

Similarly, developers solve problems with software. We need to ask the right questions, look at code and information available, and then develop solutions based upon what we have discovered. Basically, we do word problems all day, every day, but with real life challenges.

However, I believe there is a more important and subtle connection between math and programming. Programming a computer requires a very rigorous way of thinking. In a mathematics problem, if you write a 4 instead of a 5, forget to carry the one, etc., you will come up with the wrong answer. In software, if a developer forgets to “carry the one” in their code, this creates bugs. Of the things we learn in school, mathematics is the only topic that is remotely similar.

In fact, you may be familiar with a recent bug named “Heartbleed”. This bug left a huge amount of the world’s encrypted information exposed to prying eyes, and its cause might be likened to forgetting to carry the one! So exactness and precision are very important to programming.

In our society, mathematics is the gateway into programming. It trains you to think in the way that programmers must think in order to build software. Personally, I hope that the work I do at Think Through Math will help some students become better at math, and that through that comfort they will be able to experience the intellectual wonders of the world, such as programming.

Jeff Koenig

Meet the Team: Jeff Koenig

Hi web surfers! My name is Jeff Koenig and I am the Quality Assurance Manager and relentless automator at Think Through Math. I’ve been with TTM for a little over a year now and I’ve loved working here. In my time here I’ve set up a scalable Jenkins continuous integration environment, done load testing, and helped maintain the quality of the software that our awesome team produces.

How do I use math in my everyday life? Something that has been very important in many aspects of my life is efficiency. Efficiency is basically the cost of a particular task, where that cost can be a combination of time, money, and effort, among other things.

Efficiency turned out to be a very important value to have while working as a line cook, which is what I did from the time I turned 15 until I graduated from college. But wait, what does any of this have to do with math? The main job of a line cook is to reproduce a number of menu items as fast as possible without sacrificing the quality or taste of the dish. So it becomes very important to figure out how to assemble a dish in the least amount of time possible. This requires analyzing the process of putting together a particular dish and shortening every step of the process that you can.

Anything that can be prepared ahead of time is a must. After that, the next major time savings is keeping things located as close as possible to where they will be used, so you don’t waste precious time taking steps away from your station and the dish you are preparing. When you do have to take the hit of walking a couple of steps away from your station, make that movement as effective as possible by grabbing everything you are going to need from that area, so that you take the trip as few times as possible. Then keep tearing apart the longest step of the process and making it shorter, and before you know it, you are putting out beautiful dishes in a matter of minutes.

Efficiency turns out to be an excellent value to have in any part of your life. Finding the most effective way to perform particular tasks, in your everyday life and in your work, will make you a much more effective person and save you time and money.

Tim Bickerton

Meet the Team: Tim Bickerton

About Me

Hi everyone. I’m Tim Bickerton, Project Manager at Think Through Math, and I’m just about to celebrate my one-year anniversary with the team. I’m pleased to be able to spend every day working with an extraordinary group of people all committed to helping students learn math.

What does a Project Manager do, you ask? Together with Jim, our CTO, and our Product Management organization, I help ensure our talented engineers have a clearly defined feature roadmap and are kept free from distraction to focus on their creativity. I also partner with TTM Customer Support to ensure that customer feedback and issues are addressed in a timely manner with world-class technical customer service. In the past year, I’ve partnered with TTM engineers to develop new reports and dashboards for teachers and administrators, new problem types for students, and enhanced experiences for parents, among other projects.

Jim promotes a culture of teaching and learning at TTM, and that’s a culture I’m proud to stand behind. As a part-time adjunct instructor of IT Management at a local university, I take my job of demystifying technology to my undergraduate students very seriously. My goal is to provide my students the tools to be successful, and leave my courses with an understanding of just how much technology complements their professional and personal lives. Teaching and working at TTM provide me avenues to “give back” to students by providing them with the tools, feedback and instruction they need to be successful in math, technology, or even life!

So, how do I use math in my life every day?

Calculating my students’ grades, of course!

Jim Wrubel

Working at Think Through Math

When we screen candidates, they frequently ask, “What’s special about Think Through Math?”

Developers hate to repeat themselves, so this post is an attempt to describe what we think makes working at TTM different from other companies. Though it focuses on working on the Think Through Math product development team (engineering, QA, project management, UX), many of the concepts are valid for the company as a whole.

Our culture

We’ve developed a very cohesive team over the years. We’re all working at Think Through Math because we find the work meaningful and we believe in the work that we do. We also have some very specific values that are important to us as we work to fulfill our corporate mission:

  • Everyone is a teacher, and everyone is a learner. Literally half of the company is made up of current or former certified teachers, but even for those of us who aren’t certified, teaching and learning are core values. Our staff come from diverse backgrounds - teachers, publishers, designers, developers - but we all share an interest in teaching and learning (and math, of course!). From this value, we get pair programming and peer code review as practices. We also get involved with the larger community (local and online), since we realize that the answers we seek are not always held by the people we work with directly.

  • Continuous Improvement. We follow many Agile processes: we do a daily standup (and some sub-teams do their own separate standup), we organize our work into iterations while shipping features when they’re ready, and we hold iteration demos. But these are just actions. As a team, the thing we care most about is continuously improving everything we do: git workflow, deployment, operations support, etc. Our git structure reflects our principle of continuous improvement. We have two primary branches for our projects: master (which reflects what’s in production) and rc (release candidate).

  • Giving back. We rely heavily on open source software and platforms. It would be irresponsible of us to not try to find opportunities to contribute back to that community through code, knowledge sharing (including this blog), responding to Stack Overflow questions, participating in local meetups, and so forth. In many ways, this goes hand-in-hand with our focus on teaching and learning. In addition to corporate goals, every member of the team is expected to contribute to at least one open source project every year and to give two presentations on topics in their area of expertise at national/international conferences, regional meetups, or even company lunch-and-learns.

The hiring process

For technical positions, our hiring process is two steps. The first step is a group interview with 5-6 of the current development, QA, design, UX, and project management staff. We do a group interview so that we don’t pester candidates with the same set of questions, and because one of the things we look for in a candidate is their ability to gel with the team.

We realize it can be intimidating to do a panel-style interview, but ours are really informal. At one point during an interview we stopped to pull up the Lego interpretation of Eddie Izzard’s Death Star Canteen skit.

Great developers will always have options when they’re looking for a job, and it’s important for us to make sure the candidate knows what it would be like to work at Think Through Math as much as it is for us to get to know the candidate.

For the second step of the interview process, we ask candidates to give a 30-minute presentation on a topic they’re passionate about. We prefer the topic be nontechnical, since technical topics can be dull and we value diversity of interests. Our interview process matches our corporate culture; we are all teachers, and our interview process selects for people who share that interest. In the year-plus we’ve had this policy, we’ve had presentations on coffee, henna tattooing, bowling, watches, and vinyl records, among others.

One thing we don’t do in interviews is ask people to write code, pseudo or otherwise. We favor candidates with active open source contributions (giving back to OSS is another of TTM’s values), so typically by the time a candidate comes in for an interview we’ve had a chance to see their code already. There’s not much we would learn in 45 minutes of live-code interviewing that we couldn’t see from someone’s OSS contributions.

Another reason we don’t do code in interviews is that it’s an unrealistic scenario. TTM has an aggressive roadmap, but I can’t think of a situation where we would have a 45-minute deadline to write code. It’s too easy for a live-code interview to turn into a series of gotcha questions. Old standbys like “How do you reverse a string?” become meaningless when the language includes a .reverse method. We want our interviews to reflect what people will actually do day-to-day. If you’re working on something and don’t immediately have the answer, you search for it. Usually you end up on Stack Overflow, find something that works, and move on. In an interview situation, how would a potential employer feel about a candidate going to Stack Overflow? The process of writing code in an interview feels awkward, so we tend to avoid it.

Summary

If you’re interested in working at Think Through Math, or you’re a candidate wondering what the interview process is like, hopefully this post is useful to you. If you are interested in working here, check out our openings and drop us a line!

Jim Wrubel

Pennsylvania Dev Team Uses This One Weird Trick to Save 67% on Their New Relic Bill

We love New Relic. It’s saved us countless times in production, it’s the tool we reach for when we turn developer attention toward optimization, and it’s the gold standard for application monitoring. The only issue, honestly, is that as we’ve grown, the monthly spend on New Relic has started to get steep.

Our primary platform is a Rails app. We help nearly 10% of the U.S. math students in grades 3 through Algebra, so on any given school day we’re pushing a lot of traffic. At peak loads we run a pool of 20 web servers on AWS to handle it. We typically only need max capacity for 6 hours a day, as our night and weekend traffic is much lower, but New Relic charges by the server for their Standard and Pro accounts (we use Pro and would recommend it to anyone). Although there’s some allowance in their pricing model for ‘burst’ usage, at the list price of $149/mo per server on an annual contract we would end up spending nearly $3k per month for their service.

At Think Through Math we’re always looking to spend money wisely. Anything we save can help us reach more students at lower cost, or add staff to help us ship features more quickly, so any infrastructure spend is a candidate. Our app server pool uses random request routing (yeah, we know). But an interesting side effect of random routing and a large pool of server targets is that a statistical sample of overall traffic becomes a valid representation of the whole. In other words, if only 7 of our 20 servers reported data to New Relic, the data generated would be equally actionable.
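A quick back-of-the-envelope check on those numbers (the pool size and list price are from above; the every-third-server split is the scheme described later in the post):

```ruby
# 20 servers at list price, versus reporting to Pro from every third server.
pool_size  = 20
list_price = 149                          # dollars per server per month, Pro annual
full_bill  = pool_size * list_price       # nearly $3k/mo

pro_servers  = (0...pool_size).count { |i| i % 3 == 0 }  # indices 0, 3, 6, ... => 7
sampled_bill = pro_servers * list_price
savings      = 1 - sampled_bill.to_f / full_bill         # roughly two-thirds off
```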

New Relic provides average, median, 95th percentile, and 99th percentile values for overall browser page load time and app server response time, as well as breakdowns for frequently accessed controller methods, on the default dashboard home page. Error rate is also reported on the dashboard. On the Web Transactions page there are more detailed breakdowns for controller methods, and on the Database tab there are similar breakdowns for individual database calls. All of these are calculated from a total population of calls. As long as your throughput is high enough on the servers that are sending traffic to New Relic, these statistics will closely mirror the population as a whole. Likewise, New Relic’s Apdex value is calculated from the overall statistics, so the value should be the same for the subset as for the whole.

So what’s the downside? If you use New Relic ppm or rpm values (pages or requests per minute) as indicators of capacity or throughput, you’ll need to translate the values for the subset to the population as a whole. We post metrics to an internal dashboard, so in order to make our statistics match, we use two New Relic accounts, one Pro and one on the free Lite tier, and combine the values from both API calls. This gives us the advantage of being able to use New Relic Lite features for the rest of the servers, which are very robust even with their limitations. Likewise, if you use any of the New Relic reports (SLA, Capacity, Scalability, etc.), the absolute values reported by your subset will not reflect your true throughput.
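That translation is simple scaling: latency percentiles and Apdex need no adjustment, but throughput from the reporting subset has to be multiplied back up. A sketch (the rpm figure here is made up for illustration):

```ruby
# Scale throughput (rpm/ppm) reported by the Pro subset up to the full pool.
# This is only valid because random routing makes the subset a representative sample.
def extrapolate_throughput(subset_rpm, reporting_servers, total_servers)
  subset_rpm * total_servers / reporting_servers.to_f
end

# Hypothetical numbers: 7 of 20 servers reporting 3,500 requests/minute combined.
extrapolate_throughput(3_500, 7, 20)  # => 10000.0 requests/minute pool-wide
```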

Implementation

New Relic uses an API key to match data from a reporting server to an account. In our environment we use a YAML file that’s loaded during initialization. The key is that in order to set this up, you need a way for each server to determine its index within the pool during initialization. We use Scalr for managing our production environment. Its API provides SCALR_INSTANCE_INDEX, so we can use the modulo of that property to decide whether the server should register with the Pro New Relic account. We added a separate app on the Lite plan and use modulo to balance between them - every third server gets the Pro key:

license_key: <%= ENV["SCALR_INSTANCE_INDEX"].to_i % 3 == 0 ? ENV["NEW_RELIC_LICENSE_KEY"] : ENV["NEW_RELIC_LITE_KEY"] %> (from config/newrelic.yml in our Rails app; note the .to_i, since environment variables come through as strings)
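The selection logic, pulled out of the ERB as plain Ruby (the to_i matters because the index arrives from the environment as a string):

```ruby
# Every third server (index 0, 3, 6, ...) registers with the Pro account;
# the rest fall back to the Lite key.
def license_key(instance_index, pro_key, lite_key)
  instance_index.to_i % 3 == 0 ? pro_key : lite_key
end

license_key("0", "PRO", "LITE")  # => "PRO"
license_key("4", "PRO", "LITE")  # => "LITE"
```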

On boot, depending on the index of the server it will pick up the Pro or Lite key. We’ve set up gists for implementing on Scalr and Heroku for reference. These are Ruby/Rails-specific, but we welcome feedback on implementations in other platforms and languages. If we get any we’ll update this post. In any event this should be enough to get you started - hit up the comments if you need more guidance.

We realize this is gaming the New Relic system to an extent by signing up for multiple accounts at different tiers. At TTM we’re big fans of paying for software and services that we use (whether through cash or contributions to OSS), and if we don’t need Pro monitoring on our entire farm, it should be easier to specify only the servers we want on Pro. Right now it’s not, but this workaround made it possible for us to control costs and get the same value out of New Relic regardless of how big we get.

Jim Wrubel

Moving From Heroku to AWS at Think Through Math

Think Through Math launched a completely rebuilt version of our core math instruction platform in time for the 2012-2013 school year. We launched on Heroku - it’s the platform that provides the least deployment friction and has the most developer-friendly interface for managing production environments. Almost from the beginning we were plagued by performance issues. TTM added some very large customers at the same time we launched our application, which exacerbated the problem, but as it turns out, the greatest source of our production issues was the request queuing at the router level that the Rap Genius incident exposed. Once we found the issue we were able to mitigate it by switching to double dynos and dramatically increasing our dyno count, but even our best-case scenario had 500-700ms of request queuing at peak load.

After considering our options, we made the decision to transition away from Heroku. As a company we believe the public cloud is the right place to host our application at this stage in our growth, and for a host of reasons (some of which we will cover in future posts) we decided to use AWS over other cloud providers. There are many things Heroku does really well with their platform, and we wanted to maintain those features in our new environment. We made the following list of features we needed to have on AWS:

  • Deploy with git
  • Environment variables in the same manner that Heroku uses them
  • API ‘parity’ - ideally mimic Heroku’s CLI commands to reduce the learning curve
  • Templatized server roles
  • Cron support
  • Rails console and psql console through the CLI
  • Centralized logging through Papertrail
  • Scaling through the CLI
  • Database backups
  • Follower databases, including fast db cutovers

After some research we settled on Scalr as a replacement for much of the functionality we got from Heroku. Scalr provides AWS templates for our Rails stack, background workers (we use Sidekiq Pro), and Postgres. All of our other production services we left as cloud solutions: Redis, logging, and Solr.

Scalr allows us to pick instance sizes based on our needs; after some experimentation we settled on c1.xlarge for our web and background tiers. We use Unicorn for our web tier, so based on the available memory and our app size we set the worker processes per server to 20. We used Scalr’s Postgres 9.2 template running on m3.2xlarge with the EBS-optimized IO feature, and we tuned Postgres settings based on our app parameters. One of the things we always suspected about running our app on Heroku is that our inability to tune work_mem and other Postgres attributes was responsible for some of our performance limitations, and caused us to need larger databases overall to support our traffic. Scalr supports slave databases that are similar to Heroku followers, although a bit more limited in that it’s a one-to-one ratio. If the master database runs into problems, Scalr automatically promotes the slave to master and builds a new replacement slave.
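As a sketch, the Unicorn side of that tuning might look like this in config/unicorn.rb. Only the worker count comes from the post; the other settings are common illustrative defaults, not necessarily TTM’s actual values:

```ruby
# config/unicorn.rb -- illustrative sketch; only worker_processes is from the post.
worker_processes 20    # sized to the c1.xlarge memory and our app footprint
preload_app true       # load the app once in the master, then fork workers from it
timeout 30             # kill workers stuck longer than 30 seconds

before_fork do |server, worker|
  # Close shared connections before forking (standard Rails + Unicorn practice).
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker re-establishes its own database connection after the fork.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```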

At the routing layer we settled on ELB for load balancing, with random routing to the Rails servers. If we had more time we might have gone with HAProxy, but as an education platform we have a narrow window in the summer, when school’s out, to launch and test significant changes like a platform switch.

Scalr uses ‘farms’, which are conceptually similar to Heroku apps, and inside each farm we were able to add roles based on the templates we defined. We added two special roles: one for cron (to mimic Heroku Scheduler) and one for ad-hoc remoting (to mimic the Heroku console). Each of these roles runs a full stack of our app, and the servers are updated along with the rest of the app whenever we deploy.

Speaking of deployments, we again wanted to follow the model that Heroku uses - deploy via CLI and git. To implement this we set up separate private tracking repos for all of our farms on GitHub. Scalr provides event hooks into its application workflow, so we configured each farm to pull from its equivalent repo when it receives a deploy command. One thing we don’t get with this workflow is the ability to watch the deploy process do its thing, but in our experience 90% of that process on Heroku was asset compilation.

As a team we’ve gotten used to managing everything through the command line, so an additional goal was to maintain ‘parity’ in syntax for management of our production deployments. Scalr has a gem wrapper for its API, but it wasn’t well maintained, so we forked it and added syntactic sugar to match the Heroku CLI. Where we previously could list our environment variables with heroku config -a APP_NAME, we now use ttmscalr config -f FARM_NAME. To build out the tool we implemented deploy, rails c, psql, restart, and maintenance, along with some commands we couldn’t use under Heroku, like ssh and scp.

The results we’ve seen from the cutover are pretty dramatic. According to New Relic, our app server average response time is now below 200ms, even at usage levels that are 40% over last year’s peak. On Heroku, even at our most stable, we were seeing 500-700ms of request queuing on our best days. We can’t say with certainty why things are so much better. We are using larger Rails servers with more workers per server, so the impacts of random routing are minimized. We also suspect that Postgres tuning makes a big difference. Beyond that we don’t have good answers, but with performance now stable we’re able to refocus attention on functionality, so we haven’t spent a lot of time investigating.

One thing we did not see was a lot of cost savings from the switch from Heroku to Amazon. Costs are much lower on a server-to-server basis, but factoring in support costs, the dedicated slave databases we now pay for, and charges for things like backups, storage, and data transfer, our overall hosting cost is roughly the same. The big win for us was stability and response time. Those benefits alone made the cutover worthwhile.

Mario Signore

Meet the Team: Mario Signore

Hey y’all, I’m Mario Signore, a quality assurance guy at Think Through Math. This is my first job in the technology field since graduating from college, and I couldn’t imagine a better place to start. It is such a privilege to work with people who are passionate about education!

When I’m not working, I am usually playing baseball. Growing up, everyone would make fun of baseball for being an ‘easy sport’. Football players would say that it wasn’t physical enough. Basketball and soccer players would say that they ran more in one practice than baseball players do all year. Determined to prove to everyone that baseball was the hardest sport, I turned to math.

The pitcher stands 60.5 feet away from the batter. Let’s say the average Major League pitcher throws around 90 miles per hour.

1 mile = 5280 feet and 90 miles = 475,200 feet, so the pitcher pitches 475,200 feet per hour.

1 hour = 3600 seconds, so the pitcher pitches 475,200 feet per 3600 seconds.

475,200 divided by 3600 = 132 feet per second.

So at this point, the ball is coming at the batter at a rate of 132 feet per second. However, the pitcher is only 60.5 feet away. 60.5 feet / 132 feet per second = 0.46 seconds. That means that the batter has ~0.46 seconds to hit the baseball, which is pretty fast. That’s why baseball players whose batting averages are .350 (35%) get paid millions of dollars to fail 65% of the time!
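The same unit conversion, written out as code:

```ruby
# Conversion from the post: a 90 mph pitch, mound 60.5 feet from the plate.
feet_per_mile    = 5280
seconds_per_hour = 3600
pitch_mph        = 90.0

feet_per_second = pitch_mph * feet_per_mile / seconds_per_hour  # => 132.0
reaction_time   = 60.5 / feet_per_second                        # ~0.46 seconds to react
```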

Carol Nichols

Meet the Team: Carol Nichols

Hi, I’m Carol Nichols, Software Architect and Hammer of Justice at Think Through Math! Since I started in October 2012, I’ve worked on projects such as sharding our database and integrating with Clever. I love my job and getting to work with an incredible team to make software that helps kids learn math!

To introduce ourselves, Jim has suggested that each of us post a bit about how we use math in our everyday lives. After a day of working in the virtual world of software, I enjoy going home and working on the concrete, physical aspects of my house. I try to do a lot of repairs and improvements myself, “try” being the important word!

I especially enjoy building my own furniture. I get to make exactly what I need, and if I use solid wood, it’s usually better quality and lasts longer. But making furniture does take time and thought.

My shelf without sides and the piece of wood that I cut too short

You may have heard the saying “Measure twice, cut once”, meaning you should spend extra time checking your measurements, which are easy to change, before cutting your materials, since you can’t undo a cut. In my house, the saying is “Calculate three times, measure twice, cut once”! I once did some math in my head to figure out how big a piece of wood needed to be, checked my math (but made the same mistake when checking), measured twice, cut once, and then found out that my math was incorrect. That shelf in my house still doesn’t have sides on it; I haven’t yet gone to get a replacement for the wood that I cut too short.

Since that incident, I check my assumptions and calculations with a friend and I use a calculator to make sure my mental math was right. Getting the math right helps me to buy exactly the amount of materials I need and saves me from making multiple trips to the hardware store. Math is important!