Designing Data-Intensive Applications Book: Chapter 1 Summary

I'm starting a series of blog posts with summaries of this interesting book: Designing Data-Intensive Applications


What is a data-intensive application?

It's an application where raw CPU power is rarely a limiting factor and the problems are the amount of data, the complexity of data, and the speed at which it changes. It is built from standard building blocks that provide commonly needed functionality.

In this chapter, we see the fundamentals of what we are trying to achieve.


The things that can go wrong are called faults, and systems that anticipate faults and can cope with them are called fault-tolerant or resilient.

Reliability means making systems work correctly, even when faults occur. Faults can be in hardware, software, and humans. Fault-tolerance techniques can hide certain types of faults from the end user.

The system should continue to work correctly even in the face of adversity: hardware or software faults, and even human error!

A fault is usually defined as one component of the system deviating from its spec, whereas a failure is when the system as a whole stops providing the required service to the user. It is impossible to reduce the probability of a fault to zero; therefore it is usually best to design fault-tolerance mechanisms that prevent faults from causing failures.


Scalability means having strategies for keeping performance good, even when load increases.

In a scalable system, you can add processing capacity in order to remain reliable under high load.

Scalability is the term we use to describe a system’s ability to cope with increased load.

Load can be described with a few numbers which we call load parameters. The best choice of parameters depends on the architecture of your system: it may be requests per second to a web server, the ratio of reads to writes in a database, the number of simultaneously active users in a chat room, the hit rate on a cache, or something else. Perhaps the average case is what matters for you, or perhaps your bottleneck is dominated by a small number of extreme cases.

Latency and response time are not the same thing. The response time is what the client sees: besides the actual time to process the request (the service time), it includes network delays and queueing delays. Latency is the duration that a request is waiting to be handled, during which it is latent, awaiting service.

It’s common to see the average response time of a service reported. However, the mean is not a very good metric if you want to know your “typical” response time, because it doesn’t tell you how many users actually experienced that delay. Usually it is better to use percentiles.

This makes the median (p50) a good metric if you want to know how long users typically have to wait: half of user requests are served in less than the median response time, and the other half take longer.
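A quick way to build intuition for these metrics is to compute them from a sample. Here's a sketch in TypeScript; the response times below are made-up numbers, and real monitoring systems use streaming approximations (histograms, t-digest) rather than sorting every sample:

```typescript
// Compute the p-th percentile (0–100) of a sample with the simple
// "nearest-rank" method: sort, then index into the sorted array.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Made-up response times in milliseconds; note the one slow outlier.
const responseTimes = [12, 15, 20, 25, 30, 35, 40, 55, 90, 850];

const p50 = percentile(responseTimes, 50); // 30 — half the requests are faster
const p99 = percentile(responseTimes, 99); // 850 — the outlier dominates the tail
const mean =
  responseTimes.reduce((a, b) => a + b, 0) / responseTimes.length; // 117.2
```

The mean here (117.2 ms) describes almost nobody's experience, while the median (30 ms) tells you what a typical user saw, which is exactly the point above.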

High percentiles of response times, also known as tail latencies, are important as they directly affect users’ experience of the service. Customers with the slowest requests are often those who have the most data on their accounts because they have made many purchases. On the other hand, optimizing the p99.99 can be too expensive and may not yield enough benefit.

It only takes a small number of slow requests to hold up the processing of subsequent requests, an effect sometimes known as head-of-line blocking.
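Head-of-line blocking is easy to see with a toy model: a single worker serving a FIFO queue, where one slow request inflates the response time of everything behind it (the service times are made-up numbers):

```typescript
// Toy single-worker FIFO queue: each request must wait for all earlier ones.
// Assuming all requests arrive at t=0, a request's response time is the sum
// of every earlier service time plus its own.
function fifoResponseTimes(serviceTimes: number[]): number[] {
  let clock = 0;
  return serviceTimes.map((service) => {
    clock += service; // finish time == response time for arrivals at t=0
    return clock;
  });
}

// One 500ms request at the head of the line; the rest take 10ms each.
const times = fifoResponseTimes([500, 10, 10, 10]);
// → [500, 510, 520, 530]: three cheap requests each appear to take ~half a second
```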

Scaling up (vertical scaling, moving to a more powerful machine) and scaling out (horizontal scaling, distributing the load across multiple smaller machines).

Distributing load across multiple machines is also known as a shared-nothing architecture. A system that can run on a single machine is often simpler, but high-end machines can become very expensive, so very intensive workloads often can’t avoid scaling out. In reality, good architectures usually involve a pragmatic mixture of approaches: for example, using several fairly powerful machines can still be simpler and cheaper than a large number of small virtual machines.

Some systems are elastic, meaning that they can automatically add computing resources when they detect a load increase, whereas other systems are scaled manually (a human analyzes the capacity and decides to add more machines to the system).

While distributing stateless services across multiple machines is fairly straightforward, taking stateful data systems from a single node to a distributed setup can introduce a lot of additional complexity. For this reason, common wisdom until recently was to keep your database on a single node (scale up) until scaling cost or high-availability requirements forced you to make it distributed.

An architecture that scales well for a particular application is built around assumptions of which operations will be common and which will be rare.


Maintainability is making life better for the engineering and operations teams who need to work with the system. Good abstractions can help reduce complexity and make the system easier to modify and adapt for new use cases. Good operability means having good visibility into the system’s health, and having effective ways of managing it.

It is well known that the majority of the cost of software is not in its initial development, but in its ongoing maintenance.

See you in the next chapters' summaries 🙂

Google Photos API: how to use it and why it will probably disappoint you



Recently I needed to close a Google Apps account, and I tried to migrate albums programmatically. I’ll document the needed steps here and explain why this Google API is useless for most of us:

First you need an app token. You can get it from the Google Cloud Console: there you need to register your project and enable the API from the library.

You should now have both client_id and client_secret, so you can fetch the code quite easily with an OAuth2 flow:


client_id="foo"
client_secret="bar"

# Standard OAuth2 authorization URL; the Photos Library scope is my assumption here.
url="https://accounts.google.com/o/oauth2/v2/auth?client_id=$client_id&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&scope=https://www.googleapis.com/auth/photoslibrary"

echo "$url"

If you open that URL in a browser and grant consent, you’ll get the $code, and with that code you can fetch the tokens:

code="lol"
curl --request POST \
  --data "code=$code&client_id=$client_id&client_secret=$client_secret&redirect_uri=urn:ietf:wg:oauth:2.0:oob&grant_type=authorization_code" \
  https://oauth2.googleapis.com/token

With the refresh_token you can already do what you need. Here is an example Kotlin script I worked on.
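For reference, refreshing the access token later is just another POST against the token endpoint, this time with grant_type=refresh_token. A minimal sketch of building that request body (the endpoint and parameter names follow Google's standard OAuth2 flow; the credential values are placeholders):

```typescript
// Build the form-encoded body for refreshing an access token.
// The result is POSTed to https://oauth2.googleapis.com/token.
function buildRefreshBody(
  clientId: string,
  clientSecret: string,
  refreshToken: string
): string {
  const params: [string, string][] = [
    ["client_id", clientId],
    ["client_secret", clientSecret],
    ["refresh_token", refreshToken],
    ["grant_type", "refresh_token"],
  ];
  // Form-encode each key/value pair.
  return params.map(([k, v]) => `${k}=${encodeURIComponent(v)}`).join("&");
}

const body = buildRefreshBody("foo", "bar", "my-refresh-token");
// → "client_id=foo&client_secret=bar&refresh_token=my-refresh-token&grant_type=refresh_token"
```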

But in the end I just did it manually, zooming out from the web client. It turns out Google only offers consent scopes that let you manipulate photos and albums created with your own app, so you can’t move photos between albums created by the official tool. This means you cannot organize your library automatically unless you only need to work with photos you would upload with your own app…



My 2019 Coursera courses

2019 was an awesome year for me, mainly because I became a father 🤗 but I also found time to keep my learning habit 🤓, something very important 15 years after my first job in the field. So I’d like to list the Coursera courses I did and elaborate on why:

  • Conflict Resolution Skills (cert): a good introduction, something essential even if you’re in an individual contributor position but critical in management.
  • Kotlin for Java developers (cert): a great course in order to jump from Java to Kotlin. We’ve been increasingly using Kotlin at work (even for microservices!) so I found it was a good way to review the language in general.
  • Programming Languages, Part A (cert): getting into functional programming was something I had wanted to do for a long time. I did some Haskell at uni, but that was ages ago, and I knew the typical few things used in JavaScript or Kotlin, but using a pure FP language is a very different thing.
  • Programming Languages, Part B (cert): Part A used SML; this part used Racket, which was a bit of a parenthesis nightmare at first but turned out to be very fun, as I practiced implementing a little programming language, something I hadn’t done since university.

If you have a recommendation of any online course for 2020 please leave a comment 🙂

HOWTO see Google Calendar events in yearly view


It turns out I had already booked a few events for 2019, so I wanted a yearly view of everything I have. I was disappointed to see that the current Google Calendar yearly view is useless, as it’s just empty. There are lots of comments about this issue in this Google product forums entry.

[Screenshot: the empty Google Calendar yearly view]

[Screenshot: a Google product forums reply]
Ron Irrelavent is absolutely right

So I did some searching for solutions and found these 2:

  • Google Calendar Plus extension
    • I haven’t even tried it, as I’m tired of Chrome extensions, but it seems to work.
  • Visual-Planner project
    • A bit ugly, but it works and it’s open source, so this is what I’m using. You can use it without installing it here (you just need to OAuth into your Gmail account). The only drawback is that it does not display names for multi-day events; as a workaround you can create a single event for the first day, e.g. “Flight to London”.

[Screenshot: Visual-Planner yearly view]
This is something ¯\_(ツ)_/¯

Let me know if you have better alternatives.

I also hope Google implements this… Hello, Google PMs? 🙂

Thoughts about React Native after a few months working with it



I faced the following challenge in January:

  • Porting a complex web app to native Android and iOS. The web app to be ported is written in ReactJS+Redux; besides, most of its business logic is in a pure ES5 JavaScript library.

So in this situation, React Native (“RN” from now on) seemed like the way to go, as we wanted a working prototype in a month and it had to be maintained on both Android and iOS without extra resources.


Some misconceptions(?) about RN that I’ll talk about:

  1. RN is a solution for pure JavaScript developers who only want to know a bit about mobile native development. (FALSE)
  2. RN means learn ‘once’ and write anywhere (yes, but…)
  3. RN might not make sense if you are a proficient mobile native developer (FALSE)
  4. RN in the worst case can perform too badly to the point of having to rewrite everything to native when it’s too late. (FALSE)
  5. RN allows you to reuse tons of code across platforms (TRUE)
  6. RN is fun (TRUE!)

Let’s divide and conquer the discussion:


react-native run-platform little disappointment

I’ll avoid talking about Expo, as I can’t rely on it: I use a few native libraries that require the app to be ejected. So, the first experience creating a RN app is pretty cool: you just do react-native init MyNewFancyApp and there you have a hello world which you can run with react-native run-PLATFORM.

Do both react-native run-android / run-ios work as you would expect? Well, more or less:

  • react-native run-ios compiles your iOS project, including native dependencies, links the project and runs Metro so that the JS code is bundled and served by a Node server running on your laptop. Finally it runs your app in a Simulator target.
  • react-native run-android compiles your Android app with Gradle and also runs Metro the same way, but there’s a difference: it won’t start an emulator or simulator; it will run adb install against whatever is attached to your laptop: an emulator running in Genymotion, a real device connected over USB…

[Screenshot: the Metro bundler terminal]
You get used to keeping an eye on this.

This is the first little difference you find between using RN on each platform, and it’s just about tooling: react-native run-PLATFORM behaves differently on Android and iOS. Actually, if you want to run your RN app on your device, react-native run-android will work fine, as “adb install” works the same way for emulators and devices, whereas react-native run-ios is only enough for running on the simulator; you’ll need Xcode to run it on your device. You might think “not a big deal”, and you’re right, but it turns out RN is full of this kind of detail that could be improved to make the dev experience better.

[Screenshot: Android Studio and Xcode]
My 2 IDE friends, always open for RN work


A RN app is a native app, so you’ll need to learn mobile native stuff

You’ll see react-native init basically creates a Gradle Java Android project under the /android folder, an Xcode project under the /ios folder, and an index.js to start the RN app.

You might think you won’t need to care about those autogenerated native projects, just about the JS, and you’d be wrong.

Every time you link an npm RN library containing native code (with react-native link) you’ll see how it patches your projects, and sometimes that won’t be enough: you’ll need to make some manual changes to integrate such libraries correctly. This means you cannot be a zero-native-knowledge developer and use RN; you’ll need to get your hands on those native projects from time to time. Don’t cry about it, embrace it, as this will rarely change: RN is basically about patching, over and over, those native projects you scaffold initially, and you’d better check what changes are introduced “automatically” when you link new libraries. It’s good practice to try to understand the generated patch every time you run react-native link.

Does this mean you need to know both Java and Objective-C to be a productive RN developer? IMHO, yes.


I enjoyed this book.


For Java there are plenty of books you can use whereas for learning Objective-C I can recommend Objective-C Programming: The Big Nerd Ranch Guide (Big Nerd Ranch Guides) 1st Edition.

You could also learn Swift, but you’ll find that you need to learn Objective-C anyway, so that should be your priority: most RN libs are written in Objective-C, and your generated /ios RN project is in Objective-C.

Other examples reinforce the idea that if you want to do serious RN work and you only know JavaScript, you really should learn the mobile native languages anyway: there are simple things, such as changing the background of your app or tweaking the splash screen, that you simply cannot do from JavaScript alone. These are exceptions rather than the rule, but you don’t want to limit yourself anyway.

I can mention that you’ll probably need to change AppDelegate.m when using certain libraries. Those libraries typically provide some code you can copy-paste there, but they don’t tell you how to correctly mix different libraries, and that’s something you need to do with care and at least some Objective-C understanding. E.g. some libs tell you to add a continueUserActivity method to AppDelegate.m, which can collide with other libraries you had, and you might need to figure it out yourself.

[Screenshot: AppDelegate.m]
Just a simple example of supporting 2 libraries at the same time

At some point, you might even need to create a native module yourself. The RN documentation has examples of how to do it, so you can see that the vision of RN is not about avoiding all native development; it’s more about boosting productivity and sharing business logic and practices across very different platforms.

Debug mode and runtimes

Usually, when you’re developing with RN you’ll be using DEBUG mode. You’ll soon discover that running your app that way means your JS code is executed by your laptop’s browser, which is important to keep in mind, as on Android especially you can hit important differences that will make you introduce extra polyfills, e.g. the Symbol.iterator issue.

My advice for this is clear: test your JS code often on your device/simulator, and don’t live in the browser runtime all the time, or a bad surprise will come too late, when you thought your code was already ready. You might think, tests! Well, that kind of problem will only be detected if you’re running those tests with the device runtime; otherwise they won’t be detected either 😦

In practice, I usually develop with both Chrome and an iOS device runtime, I check on Android from time to time, and it’s common to find a surprise to fix. It also depends on which platform you prioritize: in my case I focus on iOS first and care about nailing Android with less priority. You cannot assume things will just work on every platform out of the box.



Testing on native platforms is quite mature, so I was not sure whether I would be comfortable with RN in that regard. Luckily it’s pretty good: it takes some time to get used to the way dependency injection is done (basically ES6 imports and props), but the Jest testing framework rocks, and storybooks + snapshots are pretty cool. Besides, as Redux is the de facto standard for developing RN apps, that’s a part which is testable by default, as it enforces using pure functions.

One thing I do miss: in the RN world it’s common to see native libraries with no test coverage at all. My recommendation is to check the project README and its issues first, as a project that seems great at first glance might be better avoided for being unmaintained or buggy. You’ll find there are lots of react-native-foo libs that expose “foo” native APIs, but there are often several libraries doing the same thing with different levels of quality.

Organizing your code

About organizing your code: don’t assume you should do it the way you see in basic examples; consider them just a starting point for having something working, but not something maintainable long-term. E.g. avoid adding all your business logic directly to your components, favour pure functions (using Redux is a must), extract the Redux parts from components into containers, find a consistent way to organize your styles instead of having them spread across your components, and in general apply any practice you know from other good languages/frameworks, even if you rarely see them in common RN examples.

One more thing: sometimes you’ll find you need different code depending on the platform. Think carefully about how to do it: the typical “if (platform == ‘ios’)” can be a first working approach, but you should try to wrap those blocks in generic classes or components which deal with the differences internally, so that you don’t have lots of platform branching in your main business logic.
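A minimal sketch of that idea: resolve the per-platform implementation in one place, so the call sites stay branch-free. This is plain TypeScript with the platform value stubbed; in a real RN app you’d feed it from the Platform module, which also ships a similar Platform.select helper:

```typescript
// Generic per-platform selector: each module declares its variants once,
// and business logic uses the resolved value without any if/else branching.
type PlatformName = "ios" | "android";

function forPlatform<T>(
  platform: PlatformName,
  variants: { ios: T; android: T }
): T {
  return variants[platform];
}

// Hypothetical example: a haptic-feedback function that differs per platform.
// The platform is hard-coded here; in RN you'd pass Platform.OS instead.
const vibrate = forPlatform("ios" as PlatformName, {
  ios: () => "taptic-engine",
  android: () => "vibrator-service",
});

vibrate(); // callers never see the platform branching
```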

Preparing builds


Preparing release/alpha RN builds is as complex as preparing classic native ones, or even a bit more, as you also need to care about the JS bundle. Luckily you can automate it with fastlane, something you should really do to avoid the pain of preparing them manually. Besides, you can consider using CodePush so that you can avoid preparing new builds unless you’ve really changed native dependencies.


Performance was one of my main doubts about RN:

  1. Most of your code is in JS, and JavaScript code does not run on the UI thread; it has its own background thread. This is great for someone used to Android development, where blocking the UI thread is a typical problem. In RN, if you have bad performance because of your JS code, you won’t be affecting the UI thread, as JS runs on a different thread. Besides, the async nature of running your JS code makes deadlocks unlikely, something I’ve seen happening on Android depending on the practices.
  2. Performance issues are quite easy to detect thanks to libraries like slowlog and Snoopy. With slowlog you can monitor whether components take too much time to mount, and with Snoopy you can watch whether you’re having excessive activity on the RN bridge.
  3. There are lots of little details to take care of, like disabling console logs for production, avoiding blocking the JS event loop, profiling your app performance… You can read more about them in this nice documentation.
  4. Just avoid doing animations on the JavaScript thread: look for native libraries that can help, and use the useNativeDriver flag in animations.
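The spirit of tools like slowlog can be sketched in a few lines: wrap a function and report when it blows a time budget. The names and API here are my own illustration, not slowlog’s actual interface:

```typescript
// Wrap a synchronous function and report when it exceeds a time budget.
function warnIfSlow<A extends unknown[], R>(
  name: string,
  budgetMs: number,
  fn: (...args: A) => R,
  onSlow: (name: string, tookMs: number) => void = (n, t) =>
    console.warn(`${n} took ${t.toFixed(1)}ms (budget ${budgetMs}ms)`)
): (...args: A) => R {
  return (...args: A) => {
    const start = Date.now();
    const result = fn(...args);
    const took = Date.now() - start;
    if (took > budgetMs) onSlow(name, took); // only report budget overruns
    return result;
  };
}

// Usage: flag any function that blocks the JS thread for too long.
const slowSum = warnIfSlow("slowSum", 5, (n: number) => {
  let acc = 0;
  for (let i = 0; i < n; i++) acc += i;
  return acc;
});
```

The same wrapping idea is what lets these tools stay out of your way in production: the instrumented function behaves exactly like the original apart from the measurement.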


The RN bridge



The RN bridge is an important thing to understand as a RN developer. Magic does not exist, in RN either. Your JS code runs on a background thread and your native modules run on their own background threads too, so how does your JS code communicate with your native modules?

How is a call from javascript translated to native land?

Well, basically the message object you want to send to the native API is serialized so that it can be communicated to the other thread, and then it’s deserialized on the native side. That’s basically the bridge, and it has a cost. If you do too much communication from JS to native or vice versa, that will degrade your app: those calls are actually batched, but it can become a problem anyway. The bridge is a bottleneck, as other code that needs it will have to wait.
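A toy model of the bridge makes that cost visible: calls are queued on the JS side and flushed across as one JSON payload, so each flush pays a single serialization crossing no matter how many calls it carries. The shapes and names here are illustrative, not RN’s real internals:

```typescript
// Toy native bridge: JS-side calls are queued, then flushed as one JSON batch.
type BridgeCall = { module: string; method: string; args: unknown[] };

class ToyBridge {
  private queue: BridgeCall[] = [];
  public flushes = 0;

  enqueue(module: string, method: string, args: unknown[]): void {
    this.queue.push({ module, method, args });
  }

  // In RN the flush happens periodically / when the native side drains the queue.
  flush(): string {
    this.flushes++;
    const payload = JSON.stringify(this.queue); // the serialization cost of the bridge
    this.queue = [];
    return payload;
  }
}

const bridge = new ToyBridge();
bridge.enqueue("UIManager", "updateView", [42, { opacity: 0.5 }]);
bridge.enqueue("Vibration", "vibrate", [100]);
const payload = bridge.flush(); // two calls, one serialized crossing
```

Batching amortizes the crossing, but a chatty JS↔native conversation still serializes everything it sends, which is why tools that count bridge traffic are so useful.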

How can you take care of this? As I said, you can use a tool like Snoopy to monitor it. If you rarely see high peaks on the bridge, congratulations! You’re achieving native-like performance in your react-native app. Otherwise, you’ll need to do something about it. What? Well, if those peaks are quite rare you can try to improve the UX when they happen, whereas if they really affect the user experience or battery, you might need to port some JS code to a native module.

Pure JS libs vs RN native libs

Related to this, you’ll often find different libraries that try to do the same thing, but one does it in JavaScript (always using the JS thread) whereas the other requires adding native code to your android/ios native projects. The pure JS library is probably easier to set up, whereas the one including native code will require a react-native link in the best case (plus checking the patch carefully), but it can also happen that you need to use CocoaPods and apply some monkey patches yourself. You need to evaluate each case, because such “pain” might pay off. An example of this is choosing a navigation lib for your RN app: I prefer react-native-navigation to react-navigation.

A kind of magic


It can be interesting to check the RN repo to see the magic under the hood.

E.g: In BatchedBridge/MessageQueue.js you can see the enqueueNativeCall method:

[Screenshot: the enqueueNativeCall method in BatchedBridge/MessageQueue.js]

In that method, you’ll see the method and params data are pushed to _queue, and that queue is flushed from time to time by calling global.nativeFlushQueueImmediate(queue). And what’s the next step? If you look for nativeFlushQueueImmediate you’ll land in C++ world, where you can see the interesting part: the queue being JSON-serialized. From there you reach NativeToJsBridge.cpp, where you can see the implementation of both JsToNativeBridge and NativeToJsBridge. callNativeMethod is finally run in ModuleRegistry.cpp, and after that you’re already in platform-specific territory: e.g. you can see the actual call being performed with reflection on Android, whereas I think the equivalent job is done on the iOS side.


Fun 🙂

[Image: a “Fun” minion]

So why can RN be more enjoyable for a native developer?

Basically because you rarely need to recompile your app unless you change the native dependencies, something you’re not doing N times a day, and probably not even N times a week, whereas you’re probably changing templates N times a day, or business logic, or assets, or colours… And those things in RN are in JS: you can just refresh and see the changes almost instantly! This makes the RN development cycle way more similar to web development, as the feedback loop is way, way faster, and you can spend more time in the zone, having fun as a happy developer: you don’t need to multitask to find other little things to do while your app is being rebuilt. Even if your app is small and takes only 30 seconds to rebuild, that’s still way more than what the JS code takes to be loaded again. You can even enable hot reloading and watch your app change as you change JS lines!

In Summary

So, when do I think react-native is a good idea as of today? IMHO it’s worth seriously considering if:

  • You need to port a quite complex web app written in ReactJS+Redux, which already runs fine, to Android/iOS.
  • You want to prototype a native app fast. I avoided talking about Expo, but check it out: if you’re lucky and can use it, it can be a good extra developer productivity boost.

And when do I think it might not pay off?:

  • You have enough good mobile native developers to develop the app for each platform.
  • You already have solid native apps for each platform; rewriting them in RN might not make sense unless you want to reuse the web version’s business logic.


And lastly, let’s review the misconceptions I talked about at the beginning:

  • RN is a solution for pure JavaScript developers who only want to know a bit about mobile native development.
    • FALSE: you need to understand those android and iOS folders, you need to be able to install RN libs including native code and you even might need to write your own native modules!
  • RN means learn ‘once’ and write anywhere.
    • Yes but… you need to learn as much as you can for each specific platform.
  • RN might not make sense if you are a proficient mobile native developer.
    • FALSE: productivity can be considerably higher with RN.
  • RN in the worst case can perform too badly to the point of having to rewrite everything to native when it’s too late.
    • FALSE: in the worst case you can migrate JS code to native code.
  • RN allows reusing tons of code across platforms.
    • TRUE: in the app I built, the whole Redux code is shared between RN and ReactJS, and that’s where most of the complex business logic is. Besides, the code differences between Android and iOS are just some UI details here and there to improve the experience on each platform.
  • RN is fun.
    • TRUE!: See the minion “Fun” part 🙂

I hope this blog post helps others getting started with this framework, which is nice and powerful, but has a few things to consider, or it’s easy to end up disappointed.

A Tech Lead HOWTO

I’ve been working in a Tech Lead position for a bit more than 2.5 years. Here are some notes that would have been useful to me, and hopefully will be to someone reaching this page:

  1. Read the comments on this HN question:
  2. Create a roadmap, parallel to the team projects. Make sure you have a long-term plan in mind.
  3. Your team needs to deliver the projects while maintaining a good working environment; otherwise, they’ll burn out soon. The opposite is true too: having the happiest team doesn’t matter if projects don’t evolve as they should.
  4. Be patient; start with little improvements as you come to understand your area. Avoid a feeling of revolution: people don’t like too many changes at the same time.
  5. Coding is probably not the most important thing you’ll do for your team. The first months it will feel awkward, but you’ll get used to it once you understand your responsibilities.
  6. Understand your area top-down, from architecture to code. You probably won’t be an expert in every repo, but you should be an expert on how everything glues together, how the architecture works and how it should evolve.
  7. 1:1s are one of the most important things you’ll do. Make sure most of the 1:1 time is informal; the project-sync part should be just the first 5 minutes. Try fixing calendar events for them.
  8. Overcommunicate: tell important things to the team as a group, repeat them again in each 1:1, and check whether your message is being understood. Always ask for opinions, especially in 1:1s; your mates will often help you do things better.
  9. Be data-driven. Make sure you can see the state of everything with data; you should not need to run ad-hoc queries or dirty scripts to gather important health data. Your important telemetry and KPIs should always be available for review. And if you find yourself repeatedly checking a metric against a certain threshold, just create an alarm!
  10. You’ll need to keep caring about your craftsmanship, and it will be harder than ever, as you won’t be coding most of the day. Make sure you keep improving technically, not just in soft skills: maintain pet projects, take courses, and try to contribute to team code in low-priority tasks from time to time.
  11. Be involved in code reviews and read every pull request if possible; that’s where you’ll get a feel for how the work is evolving. However, avoid micromanaging, especially senior mates; just be a helpful safety net if needed. Avoid commenting in CRs if you have nothing important to say; avoid the “here comes the boss comment” syndrome.
  12. Detect any blocker and tackle it with the utmost urgency; it’s one of your main responsibilities. Ask for blockers in every standup.
  13. Be the tech proxy for your team so that they can focus on their tasks. Be ready to be interrupted, and learn how to optimize context switches. Make sure your team understands you’re approachable and that they can ping you about any problem without waiting; you’re there to be interrupted at any time if needed.
  14. Supervise estimations. Avoid being blindly conservative by default: think carefully about the risks, and if there’s a high risk then yes, be conservative with the estimate. But if you’re estimating a task your team has already done in the past, it should be pretty straightforward (and perhaps should be automated?).
  15. Introduce and welcome changes to how things are done, but be careful about when those changes are applied: if they impact a project, they can be difficult to justify. Related to this, avoid taking the easy route of always saying “NO” to new things just to avoid risk, as your platform should evolve as part of your plan.
  16. Measure technical debt, buy it consciously and fight it as part of your plan with priorities.
  17. Improve your soft skills. You’ll be more time talking to people than to your IDE.
  18. Be always constructive with feedback, as a criticism without action points to follow is not a solution.
  19. Be the goalkeeper against toxic comments and try to favor a positive environment. However, don’t confuse a toxic comment with constructive criticism.
  20. The Tech Lead position is quite prone to workaholism. Go home! Read this post by Rafael López:


Good luck!

About Learning How to Learn course

I’ve just finished the Learning How to Learn course and I wanted to summarize some key ideas for myself and also encourage any reader to follow the course. If you’re a successful learner, many of the ideas will be familiar to you, and no Coca-Cola formula is disclosed, but I think it’s valuable to hear them explained in a more elaborate way, and you might also learn some new things. I really enjoyed this course.

  • Focused versus diffused mode of thinking.

You’ll learn how you need to work with both modes of thinking; both are important and key to success.

If you only try to learn in the focused mode, you might have trouble working out creative ideas, linking ideas to others that seem unrelated at first, and with creativity in general.

When you’re in the focused mode it’s like distractions do not exist: all your CPU is dedicated to a single task, which lets us concentrate on the information we’re working on, processing it very efficiently and memorizing what we need. However, often you need to see a bigger picture, as big as you might need: you start focused on a problem that seems to have no solution, you let your mind fly and start touching related ideas, like zooming out from the problem, and that’s when sometimes “magic” happens and you understand something relevant that you weren’t able to see while focused on the problem. This is the diffuse mode.

This is what happens when you manage to find a solution to a problem after having a walk, after sleeping, or while moving your chair around instead of staring at the code… I’ve often seen this when trying to fix a bug, in that moment where you think “This cannot be happening, it’s impossible”. Then you go to the kitchen, or start looking at the sky, or go home and keep thinking about the problem at the gym, and at some point you realize there’s something you had missed that could actually be the problem, and it is! I think this is also related to rubber duck debugging, as it lets you switch to diffuse-mode thinking while you explain the problem to a mate.

More about focused vs diffuse mode.

  • Procrastination.

This is a topic I’ve been interested in for some time. Some ideas I already knew are explained, like starting with the tasks you don’t like so that you get an “energy boost”, focusing on process instead of product so that you enjoy the routines and avoid thinking too much about the long-term goal, and trying techniques like Pomodoro.

It gives a nice analogy: you do lots of things in zombie mode and usually don’t feel like procrastinating while doing them; you feel the urge to procrastinate when you know something requires effort, even if it’s a task you actually want to do. Thinking about the product reinforces procrastination. The good news, as the course says, is that once you start, the “pain” stops, and you can use techniques like Pomodoro to keep advancing.


  • Learning

It explains how important practice is, the importance of testing yourself (as in an exam), and how productive that is versus just re-reading the material. It also talks about the risk of over-learning, which you can avoid by testing yourself early so you know whether you’re already ok with the subject. Related to that, there is an interesting concept called Einstellung.

It also explains spaced repetition, something I already knew but never applied very well and might try to do better in the future: you review a subject at roughly exponentially increasing intervals so that your brain saves the information to the “hard disk” instead of just “RAM” (or even L2 cache). As I’ve said, I’ve typically studied a subject for some time and moved on to the next one instead of continuing to review the previous subject alongside the current one.

The course also talks about the importance of memorizing and practicing; understanding is not enough. The eureka moment is good, but you need practice to be able to work with the concepts and have them in your mind long term.

It also explains that sleeping is very important so that your brain can organize what you’ve learned and you can be in good shape the next day: there are metabolic by-products that block your learning, and you need rest to clear them. It also explains how important recalling is for learning. This is something my father did great; I remember how he quizzed me on each lesson, and that was key for learning. When you’re asked or tested, you’re not only checking whether you’ve learned, you’re actually learning at that moment and consolidating the concepts in your brain.

The course also explains the concept of “chunking”: you study a concept or idea, understand it, practice it, and link it to other concepts, like a piece of a puzzle. You can memorize a concept, but on its own it will be useless, like a variable that gets garbage-collected because nothing references it. Chunks are built from small pieces into bigger ones, and your diffuse mode of thinking can then try to find new relations among all of them. That’s what experience is about: when you’ve worked on lots of projects in different areas, you see how all of them in some way guide your decisions in the next ones.

There are other interesting concepts related to learning, like the “illusion of competence”: when you think you’ve learned something because you’ve read a lot about it, but you haven’t really learned it. The course also talks about a related concept that is well known in our profession: the Impostor Syndrome.

It also mentions that techniques like highlighting text can be worse than writing personal notes in the margin of the book or drawing a map of the ideas. E.g., when taking notes, instead of starting from the top of the page, start from the middle, write down some ideas, and link them…

Finally, it also mentions the importance of metaphors and how you can replace metaphors with better ones as your understanding improves.

Practice, practice and practice:


  • Memorization

In a world where all the information is on your smartphone, do we really need to memorize things, or do we just need to understand them?

It turns out you can’t learn without some level of memorization, and studies show that being able to memorize important things is also positive for creativity. It’s also related to practice: you’ll memorize the things that are important as part of your practice. The course mentions some techniques that help with memorizing, like the Memory Palace, mnemonics, acronyms…


  • Tips for tests

It also gives some tips for tests. Apart from encouraging good sleep before exams, it promotes first reading through all the questions to see the whole picture, then starting with a difficult exercise but jumping to an easier one if you’re blocked after a minute. This surprised me, as I’ve always started with the easy ones and then gone for the trickiest, but their explanation made sense: that way your brain is already working on the difficult ones while you work on the easy ones, encouraging your diffuse mode, I guess…

  • Some interesting resources

You can explore more of the course contents in this mindmap by Rodolfo Mondion, who has also written about the course:

The course is also very good at pointing to valuable resources, e.g.:







Analyzing FastestWebsiteEver

Checking FastestWebsiteEver was on my TODO list. Let’s look at some details in Wireshark and review some TCP/IP basics.

[Screenshot 2017-09-30 at 3.19.38]

You can check if you want to avoid setting up the service yourself.

So you open Wireshark and start capturing. Visit the site, and you can filter in Wireshark on its resolved IP.

[Screenshot 2017-09-30 at 2.11.30]

The project states that it is “the greatest website to ever fit in a single TCP packet”.

Is it true? Let’s see what it needs: I can see 9 packets. Here are the details.

First 3 packets (handshake)

[Screenshot 2017-09-30 at 3.03.49]

Like any normal TCP connection we start with a 3-way handshake:

  • First, my laptop sends a SYN packet with an Initial Sequence Number, shown as 0 in Wireshark, but that’s actually a relative value over a random initial one. This is my laptop requesting proof that its messages can get through.

[Screenshot 2017-09-30 at 2.28.14]

  • The server needs to send an ACK packet (to prove the SYN was received) and its own SYN (to prove it can reach the client). We can see both are actually sent in the same packet: SYN-ACK.

[Screenshot 2017-09-30 at 2.30.49]

  • My laptop receives the SYN-ACK: as it’s an ACK, it knows it can send packets to the server, and as it’s a SYN, it knows the server needs an ACK, so it sends that ACK.

[Screenshot 2017-09-30 at 2.35.54]

  • Once the server receives that ACK, the handshake has finished and the channel is considered usable. It’s no guarantee the connection will stay healthy, but this is the minimum needed to start trying to communicate.
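By the way, the handshake is performed by the OS kernel, not the application: a single connect() call triggers the whole SYN, SYN-ACK, ACK exchange. A minimal sketch in Python (using a local listener so it’s self-contained; the addresses are illustrative, not the real site’s):

```python
import socket

# Local listener standing in for the remote server.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))       # SYN ->, <- SYN-ACK, ACK -> all happen here
conn, addr = server.accept()       # by now the handshake is complete

print("connected from", addr[0])   # prints "connected from 127.0.0.1"
conn.close()
client.close()
server.close()
```

Capturing on the loopback interface while running this shows the same three packets described above.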

Packet #4 (GET)

We’ve finished with the first 3 packets. The fourth one is the GET request.

[Screenshot 2017-09-30 at 2.41.10]

[Screenshot 2017-09-30 at 2.48.50]

Notice that the Push flag (PSH) is enabled, and also ACK, as with every packet exchanged during the rest of the communication.

In the Hypertext Transfer Protocol section we can see the HTTP request that was sent:

[Screenshot 2017-09-30 at 2.52.48]
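The payload of packet #4 is just plain text: the request line plus headers, each ending in CRLF, terminated by a blank line. A rough sketch of such a request (the exact headers my browser sent will differ; the Host and Accept-Encoding values here are made up):

```python
# A minimal HTTP/1.1 GET request, built by hand. This is the kind of
# bytes the client hands to the kernel as TCP payload in packet #4.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"           # illustrative host
    "Accept-Encoding: deflate\r\n"    # lets the server DEFLATE the body
    "\r\n"
)
payload = request.encode("ascii")
print(len(payload), "bytes of TCP payload")   # prints "63 bytes of TCP payload"
```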

Packets #5 and #6

[Screenshot 2017-09-30 at 3.04.41]

Packet #5 is an ACK:

[Screenshot 2017-09-30 at 3.00.10]

Next packet is the HTTP GET response:

[Screenshot 2017-09-30 at 3.02.55]

I guess this is what the project describes as “send response immediately after TCP session init”.

I think PSH is enabled because Nagle’s algorithm is disabled, as the project also describes.
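Disabling Nagle’s algorithm is a standard socket option, TCP_NODELAY: with it set, small writes are sent out immediately instead of being buffered while waiting for outstanding ACKs. A sketch of how any server could set it (I haven’t checked how this particular project does it):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: push each small segment out immediately
# rather than coalescing it with later writes.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY:", nodelay)   # nonzero means the option is set
s.close()
```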

In the Hypertext Transfer Protocol section we can see that DEFLATE compression is being used, again exactly as described in the README.

[Screenshot 2017-09-30 at 3.13.12]

See that the status line is “200 k” instead of “200 OK”: 1 byte saved there.
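DEFLATE is available in most languages’ standard libraries. Here is a hedged sketch in Python of the kind of saving it gives on repetitive HTML (the sample body is made up, not the project’s real page):

```python
import zlib

# Made-up page body standing in for the real ~1.5 KB HTML.
html = b"<html><body>" + b"<p>the fastest website ever</p>" * 40 + b"</body></html>"

# Raw DEFLATE stream (negative wbits = no zlib header/trailer), which is
# what many servers actually emit for Content-Encoding: deflate.
compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
compressed = compressor.compress(html) + compressor.flush()

print(len(html), "->", len(compressed), "bytes")
```

Repetitive markup compresses very well, which is how a full page plus headers fits under the MSS.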

The encoded content is 1163 bytes (1547 bytes decoded), far from needing fragmentation:

The maximum would be 1460 bytes for content plus 40 bytes for the IP and TCP headers (with a standard 1500-byte MTU).

In this case, the frame is 1292 bytes, the TCP segment length is 1226 bytes and the HTTP Content-Length is 1163 bytes. In detail:


  • Frame header (14 bytes): 12 bytes for source and destination MACs and 2 bytes for the EtherType (IP). On the wire there are also 7 bytes of preamble and 1 byte of SFD, 22 bytes in total, but Wireshark doesn’t capture those, so we count 14.

[Screenshot 2017-09-30 at 9.49.08]

  • IP header (20 bytes): 1 byte for ip.version (4) and header length, 1 byte for ip.dsfield, 2 bytes for ip.len (Length), 2 bytes for (ID), 2 bytes shared by ip.flags (0x02, ip.flags.df (Don’t Fragment) is set) and ip.frag_offset (Fragment Offset), 1 byte for ip.ttl (TTL), 1 byte for ip.proto (Protocol), 2 bytes for ip.checksum (Header checksum), and 4 bytes each for ip.src and ip.dst.


  • TCP header (32 bytes): 2 bytes each for tcp.srcport and tcp.dstport, 4 bytes for tcp.seq (Sequence Number), 4 bytes for tcp.ack (Ack Number), 2 bytes for the data offset plus tcp.flags, 2 bytes for tcp.window_size_value, 2 bytes for tcp.checksum, 2 bytes for tcp.urgent_pointer, and 12 bytes for tcp.options.

[Screenshot 2017-09-30 at 10.17.44]

  • HTTP (1226 bytes): 15 bytes for “HTTP/1.1 200 k\n”, 21 bytes for http.content_length_header, 26 bytes for http.content_encoding_header, 1 byte for the final “\n”, and 1163 bytes of (encoded) content.
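The byte accounting above can be double-checked with simple arithmetic (all the numbers come from the capture described in this post):

```python
eth_header = 14        # MACs (12) + EtherType (2), as captured by Wireshark
ip_header = 20         # IPv4 header without options
tcp_header = 32        # 20-byte base header + 12 bytes of options

status_line = len("HTTP/1.1 200 k\n")   # 15 bytes
content_length_hdr = 21
content_encoding_hdr = 26
blank_line = 1
body = 1163                              # DEFLATE-encoded content

http_bytes = status_line + content_length_hdr + content_encoding_hdr + blank_line + body
frame_bytes = eth_header + ip_header + tcp_header + http_bytes
print(http_bytes, frame_bytes)           # prints "1226 1292"
```

The totals match the TCP segment length (1226) and frame size (1292) that Wireshark reports.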

As for the HTTP response content, it’s easier to view the page source in the browser, where you’ll see:

[Screenshot 2017-09-30 at 3.16.04]


Last 3 packets

Finally, the last 3 packets. The server resets the connection with an RST packet. I guess they could have used FIN, but RST is quicker. More about FIN vs RST.
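For the curious: one common way to make close() emit an RST instead of the orderly FIN teardown is the SO_LINGER socket option with a zero timeout. I don’t know whether this project uses exactly this trick (it would use the C equivalent), but a sketch looks like:

```python
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# l_onoff=1, l_linger=0: on close(), drop any unsent data and send RST
# instead of going through the normal FIN / ACK exchange.
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
onoff, linger = struct.unpack("ii", s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
print("SO_LINGER:", onoff, linger)
s.close()   # would emit RST here if the socket were connected
```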

[Screenshot 2017-09-30 at 3.21.41]

That’s all 🙂



An easy Vim IDE setup: using Vundle

Are you bored of messing with your own custom .vimrc, which eventually ends up broken or hard to maintain? Well, there are lots of projects that give you a well-set-up Vim plugin environment; I’ll show just one of them. I’m following the steps from the install instructions:

  1. First, let’s get Vundle:
mv ~/.vim ~/.vim.beforeVundleBackup
mv ~/.vimrc ~/.vimrc.beforeVundleBackup
mkdir -p ~/.vim/bundle
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim

With that, you already have the bare minimum to work with Vundle.

       2. Follow this to get a proper font:

If you’re using iTerm2, you basically need to download the file and load the preset from the iterm2-colors-solarized folder.

[Screenshot 2017-05-27 at 8.42.20]

Import both “Solarized Dark” and “Solarized Light”; you can decide later which one is more convenient.

In iTerm2, make sure this is your terminal type (more info):

[Screenshot 2017-05-27 at 9.07.01]

     3. Now it’s time to get the vimrc:

wget && mv vimrc.vim ~/.vimrc

     4. Now run: vim +PluginInstall +qall

And that’s all. Now your Vim should look like this:

[Screenshot 2017-05-27 at 9.10.37]

Now you have a pretty good Vim setup, and installing new plugins will be easier.

E.g., I can install an SML plugin just by adding this to the .vimrc:

Just remember to run this every time you add a plugin:

vim +PluginInstall +qall