It's an application where raw CPU power is rarely a limiting factor; the problems are the amount of data, the complexity of data, and the speed at which it changes. It is built from standard building blocks that provide commonly needed functionality.
In this chapter, we cover the fundamentals of what we are trying to achieve.
The things that can go wrong are called faults, and systems that anticipate faults and can cope with them are called fault-tolerant or resilient.
Reliability means making systems work correctly, even when faults occur. Faults can be in hardware, software, and humans. Fault-tolerance techniques can hide certain types of faults from the end user.
The system should continue to work correctly even in the face of adversity: hardware or software faults, and even human error!
A fault is usually defined as one component of the system deviating from its spec, whereas a failure is when the system as a whole stops providing the required service to the user. It is impossible to reduce the probability of a fault to zero; therefore it is usually best to design fault-tolerance mechanisms that prevent faults from causing failures.
Scalability means having strategies for keeping performance good, even when load increases.
In a scalable system, you can add processing capacity in order to remain reliable under high load.
Scalability is the term we use to describe a system’s ability to cope with increased load.
Load can be described with a few numbers which we call load parameters. The best choice of parameters depends on the architecture of your system: it may be requests per second to a web server, the ratio of reads to writes in a database, the number of simultaneously active users in a chat room, the hit rate on a cache, or something else. Perhaps the average case is what matters for you, or perhaps your bottleneck is dominated by a small number of extreme cases.
Latency and response time are not the same thing. The response time is what the client sees: besides the actual time to process the request (the service time), it includes network delays and queueing delays. Latency is the duration that a request is waiting to be handled, during which it is latent, awaiting service.
It’s common to see the average response time of a service reported. However, the mean is not a very good metric if you want to know your “typical” response time, because it doesn’t tell you how many users actually experienced that delay. Usually it is better to use percentiles.
This makes the median (p50) a good metric if you want to know how long users typically have to wait: half of user requests are served in less than the median response time, and the other half take longer.
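As a toy illustration of why percentiles beat averages, here is a minimal sketch of computing them from a list of response-time samples (nearest-rank method; real monitoring systems typically use histograms or decaying reservoirs instead of sorting raw samples):

```javascript
// Nearest-rank percentile over raw samples, for illustration only.
function percentile(samplesMs, p) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// One slow outlier barely moves the median but dominates the mean and tail.
const samples = [12, 15, 17, 20, 24, 30, 45, 80, 150, 2000];
console.log(percentile(samples, 50)); // median: 24 — the "typical" wait
console.log(percentile(samples, 95)); // p95: 2000 — what the unluckiest users see
```

Note how the mean of these samples (about 239 ms) describes almost nobody's actual experience, while p50 and p95 do.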
High percentiles of response times, also known as tail latencies, are important as they directly affect users' experience of the service. Customers with the slowest requests are often those who have the most data on their accounts because they have made many purchases. On the other hand, optimizing the p99.99 can be too expensive and may not yield enough benefit.
It only takes a small number of slow requests to hold up the processing of subsequent requests, an effect sometimes known as head-of-line blocking.
There are two approaches: scaling up (vertical scaling: moving to a more powerful machine) and scaling out (horizontal scaling: distributing the load across multiple smaller machines).
Distributing load across multiple machines is also known as a shared-nothing architecture. A system that can run on a single machine is often simpler, but high-end machines can become very expensive, so very intensive workloads often can’t avoid scaling out. In reality, good architectures usually involve a pragmatic mixture of approaches: for example, using several fairly powerful machines can still be simpler and cheaper than a large number of small virtual machines.
Some systems are elastic, meaning that they can automatically add computing resources when they detect a load increase, whereas other systems are scaled manually (a human analyzes the capacity and decides to add more machines to the system).
While distributing stateless services across multiple machines is fairly straightforward, taking stateful data systems from a single node to a distributed setup can introduce a lot of additional complexity. For this reason, common wisdom until recently was to keep your database on a single node (scale up) until scaling cost or high-availability requirements forced you to make it distributed.
An architecture that scales well for a particular application is built around assumptions of which operations will be common and which will be rare.
Maintainability is making life better for the engineering and operations teams who need to work with the system. Good abstractions can help reduce complexity and make the system easier to modify and adapt for new use cases. Good operability means having good visibility into the system’s health, and having effective ways of managing it.
It is well known that the majority of the cost of software is not in its initial development, but in its ongoing maintenance.
In the end, I just did it manually from the web client. It turns out that Google only offers consents that allow you to manipulate photos and albums created with your own app, so you can't move photos between albums created by the official app. This means you cannot organize your library automatically unless you only work with photos you would upload with your own app…
2019 was an awesome year for me, mainly because I became a father 🤗, but I also found time to keep my learning habit 🤓, something very important 15 years after my first job in the field. So I'd like to list the Coursera courses I took and elaborate on why:
Conflict Resolution Skills (cert): a good introduction, something essential even if you’re in an individual contributor position but critical in management.
Kotlin for Java Developers (cert): a great course for jumping from Java to Kotlin. We've been increasingly using Kotlin at work (even for microservices!), so I found it a good way to review the language in general.
It turns out I've already booked a few events for 2019, so I wanted a yearly view of everything I have. I was disappointed to see that the current Google Calendar yearly events view is useless, as it's just empty. There are lots of comments about this issue in this Google product forums entry.
So I searched for solutions and found these two:
A bit ugly, but it works and it's open source, so this is what I'm using. You can use it without installing it here (you just need to OAuth into your Gmail account). The only drawback is that it doesn't display names for multi-day events; as a workaround, you can create a single event for the first day, e.g. "Flight to London".
Let me know if you have better alternatives.
I also hope Google implements this. Hello, Google PMs? 🙂
So in this situation, React Native (“RN” from now on) seemed like the way to go as we wanted to have a working prototype in a month and it should be maintained in both Android and iOS without extra resources.
Some misconceptions(?) about RN that I’ll talk about:
RN is a solution for pure developers who only want to know a bit about mobile native development. (FALSE)
RN means learn ‘once’ and write anywhere (yes but…)
RN might not make sense if you are a proficient mobile native developer (FALSE)
RN in the worst case can perform too badly to the point of having to rewrite everything to native when it’s too late. (FALSE)
RN allows you to reuse tons of code among platforms (TRUE)
RN is fun (TRUE!)
Let’s divide and conquer the discussion:
react-native run-PLATFORM: a little disappointment
I'll avoid talking about Expo, as I cannot rely on it: I use a few native libraries that require the app to be ejected. So, the first experience creating a RN app is pretty cool: you just do react-native init MyNewFancyApp and there you have a hello world, which you can run with react-native run-PLATFORM.
Do both react-native run-android / run-ios work as you would expect? Well, more or less:
react-native run-ios compiles your iOS project, including native dependencies, links the project and runs Metro so that the JS code is bundled and offered by a Node server running on your laptop. Finally it runs your app in a Simulator target.
react-native run-android compiles your Android app with Gradle and also runs Metro the same way but there’s a difference: it won’t start an emulator or simulator, it will try to run adb install against whatever is attached to your laptop: an emulator running in Genymotion, a real device connected to USB…
This is the first little difference you find between using RN with each platform, and it's just about tooling: react-native run-PLATFORM behaves differently on Android and iOS. Actually, if you want to run your RN app on your device, react-native run-android will work fine, as adb install works the same way for both emulators and devices, whereas react-native run-ios is only enough for running on the simulator; you'll need Xcode to run it on your device. You might think "not a big deal", and you're right, but it turns out RN is full of this kind of detail that could be improved to make the dev experience better.
An RN app is a native app, so you'll need to learn mobile native stuff
You'll see that react-native init basically creates a Gradle Java Android project under the /android folder, an iOS project under the /ios folder, and an index.js to start the RN app.
You might think you won't need to care about those autogenerated native projects, just about the JS, and you'd be wrong.
Every time you link an npm RN library containing native code (with react-native link), you'll see how it patches your projects, and sometimes that won't be enough: you'll need to make some manual changes to integrate such libraries correctly. This means you cannot be a zero-native-knowledge developer and use RN; you'll need to get your hands on those native projects from time to time. Don't cry about it, embrace it: RN is basically about patching, over and over, the native projects you scaffolded initially, and you'd better check what changes you introduce "automatically" when you link new libraries. It's good practice to try to understand the generated patch every time you run react-native link.
Does this mean you need to know both Java and Objective-C to be a productive RN developer? IMHO, yes.
You can also learn Swift, but you'll find that you need to learn Objective-C anyway, so that should be your priority: most RN libs are written in Objective-C, and your generated /ios RN project is in Objective-C.
For instance, you'll probably need to change AppDelegate.m when using certain libraries. Those libraries typically provide some code you can copy-paste there, but they don't tell you how to mix different libraries correctly; that's something you need to do with care and at least some Objective-C understanding. E.g. some libs tell you to add a continueUserActivity method in AppDelegate.m, which can collide with other libraries you had, and you might need to figure it out yourself.
At some point, you might even need to create a native module yourself. The RN documentation includes two examples of how to do it, so you can see that the vision of RN is not about avoiding all native development; it's more about boosting productivity and sharing business logic and practices across very different platforms.
Debug mode and runtimes
Usually, when you're developing with RN, you'll be using DEBUG mode. You'll soon discover that running your app that way means your JS code is being run by your laptop's browser. This is important to keep in mind because, especially on Android, you can hit significant runtime differences that will force you to introduce extra polyfills, e.g. the Symbol.iterator issue.
My advice here is clear: test your JS code often on your device/simulator; don't live in the browser runtime all the time, or a bad surprise will come too late, when you thought your code was already ready. You might think, "Tests!": well, those kinds of problems will only be detected if you're running the tests with the device runtime; otherwise they won't be detected either 😦
In practice, I usually develop with both Chrome and an iOS device runtime, and I check on Android from time to time; it's common to find a surprise to fix. It also depends on which platform you prioritize: in my case, I focus on iOS first and care about nailing Android with lower priority. You cannot assume things will just work on every platform out of the box.
Testing on native platforms is quite mature, so I wasn't sure whether I'd be comfortable with RN in that regard. Luckily, it's pretty good: it takes some time to get used to the way dependency injection is done (basically ES6 imports and props), but the Jest testing framework rocks, and storybooks + snapshots are pretty cool. Besides, as Redux is the de facto standard for developing RN apps, that part is testable by default, since it enforces pure functions. One thing I do miss: in the RN world, it's common to see native libraries with no test coverage at all. My recommendation is to check a project's README and issues first, as a project that seems great at first glance might be better avoided for being unmaintained or buggy. You'll find lots of react-native-foo libs exposing "foo" native APIs, and often several libraries doing the same thing with different levels of quality.
Organizing your code
About organizing your code: don't assume you should do it the way you see in basic examples; consider them just a starting point for getting something working, not something maintainable in the long term. E.g. avoid adding all your business logic directly to your components, favour pure functions (using Redux is a must), extract the Redux part from components into containers, find a consistent way to organize your styles instead of having them spread across your components, and in general apply any practice you know from other good languages/frameworks, even if you rarely see it in common RN examples.
One more thing: sometimes you'll need different code depending on the platform. Think carefully about how to do it: the typical if (platform == 'ios') can be a first working approach, but you should try to wrap those blocks in generic classes or components that deal with the differences internally, so you don't have lots of platform branching in your main business logic.
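Here's a minimal sketch of that idea in plain JS. RN ships a Platform.select helper that works along these lines; `os` below is a hypothetical stand-in for Platform.OS, so the sketch runs without React Native:

```javascript
// Keep platform branching inside one helper so business logic stays
// branch-free. `os` stands in for RN's Platform.OS ('ios' | 'android').
function select(os, spec) {
  return os in spec ? spec[os] : spec.default;
}

// One module knows about the platform differences...
function cardStyle(os) {
  return select(os, {
    ios: { shadowOpacity: 0.2, shadowRadius: 4 },
    android: { elevation: 4 },
  });
}
// ...so callers never write if (platform == 'ios') themselves.
```

The payoff is that when a third platform difference appears, you touch one module instead of hunting for scattered conditionals.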
Preparing release/alpha RN builds is as complex as preparing classic native ones, or even a bit more, as you also need to take care of the JS bundle. Luckily, you can automate this with fastlane, something you should really do to avoid the pain of preparing builds manually. Besides, consider using CodePush, so you can avoid preparing new builds unless you've actually changed native dependencies.
This was one of my main doubts about RN:
Performance issues are quite easy to detect thanks to libraries like slowlog and snoopy. With slowlog, you can monitor whether components take too long to mount, and with snoopy, you can watch for excessive activity over the RN bridge.
The RN bridge is an important thing to understand as an RN developer. Magic doesn't exist, not even in RN. Your JS code runs in a background thread, and your native modules run on their own background threads too, so how does your JS code communicate with your native modules?
Well, basically the message object you want to send to the native API is serialized so it can be communicated to the other thread, and then it's deserialized on the native side. That's basically the bridge, and it has a cost. If you do too much communication from JS to native or vice versa, it will degrade your app; those calls are actually batched, but it can become a problem anyway. It's a bottleneck, as other code that needs the bridge will have to wait.
How can you take care of this? As I said, you can use a tool like snoopy to monitor it. If you rarely see high peaks on the bridge, congratulations! You're achieving native-like performance in your react-native app. Otherwise, you'll need to do something about it. What? Well, if those peaks are quite rare, you can try to improve the UX when they happen, whereas if they really affect the user experience or battery, you might need to port some JS code to a native module.
Pure JS libs vs RN native libs
A kind of magic
It can be interesting to check the RN repo to see the magic under the hood.
So why can RN be more enjoyable for a native developer?
Basically because you rarely need to recompile your app unless you change the native dependencies, something you're probably not doing N times a day, or even N times a week. Meanwhile, you probably are changing templates N times a day, or business logic, or assets, or colours, and in RN those things live in JS: you can just refresh and see the changes almost instantly! This makes the RN development cycle much more similar to web development, as the feedback loop is far faster, and you can spend much more time in the zone, having fun as a happy developer, instead of multitasking to find other little things to do while your app rebuilds. Even if your app is small and takes only about 30 seconds to rebuild, that's still far more than what the JS code takes to reload. You can even enable hot reloading, and you'll see your app change as you change JS lines.
So, when do I think react-native is a good idea as of today? IMHO, it's worth seriously considering if:
You need to port a quite complex web app written in ReactJS+Redux, which already runs fine, to Android/iOS.
You want to prototype a native app fast. I avoided talking about Expo, but check it out: if you're lucky enough to be able to use it, it can be a good extra productivity boost.
And when do I think it might not pay off?:
You have enough good mobile native developers to develop the app for each platform.
You already have solid native apps for each platform; rewriting them in RN might not make sense unless you want to reuse the web version's business logic.
And lastly, let’s review the misconceptions I talked about at the beginning:
RN is a solution for pure developers who only want to know a bit about mobile native development.
FALSE: you need to understand those android and ios folders, you need to be able to install RN libs that include native code, and you might even need to write your own native modules!
RN means learn ‘once’ and write anywhere.
Yes but… you need to learn as much as you can for each specific platform.
RN might not make sense if you are a proficient mobile native developer.
FALSE: productivity can be considerably higher in RN.
RN in the worst case can perform too badly to the point of having to rewrite everything to native when it’s too late.
FALSE: in the worst case you can migrate JS code to native code.
RN allows reusing tons of code among platforms.
TRUE: in the app I built, the whole Redux code is shared between RN and ReactJS, and that's where most of the complex business logic lives. Besides, the code differences between Android and iOS are just some UI details here and there to improve the experience on each platform.
RN is fun.
TRUE!: See the minion “Fun” part 🙂
I hope this blog post can help others getting started with this framework: it's nice and powerful, but there are a few things to consider, or it's easy to end up disappointed.
Create a roadmap, parallel to the team projects. Make sure you have a long-term plan in mind.
Your team needs to deliver the projects while maintaining a good working environment; otherwise, they'll burn out soon. The opposite is true too: it doesn't matter having the happiest team if projects don't evolve as they should.
Be patient; start with little improvements as you get to understand your area. Avoid a feeling of revolution: people don't like too many changes at the same time.
Coding is probably not the most important thing you'll do for your team. The first months it will feel awkward, but you'll get used to it once you understand your responsibilities.
Understand your area top-down, from architecture to code. You probably won't be an expert in every repo, but you should be an expert in how everything glues together, how the architecture works, and how it should evolve.
1:1s are one of the most important things you'll do. Make sure most of the 1:1 time is informal; the project-sync part should be just the first 5 minutes. Try fixing recurring calendar events for them.
Overcommunicate: tell important things to the team as a group, repeat them in each 1:1, and check whether your message is being understood. Always ask for opinions, especially in 1:1s; your mates will often help you do things better.
Be data-driven; make sure you can see the state of everything with data. You should not need to run ad-hoc queries or launch dirty scripts to gather important health data: your important telemetry and KPIs should always be available for review. And if you find yourself repeatedly checking a metric against certain thresholds, just create an alarm!
You'll need to keep caring about your craftsmanship, and it will be more difficult than ever, as you won't be coding most of the day. Make sure you keep improving technically, not just growing in soft skills: maintain pet projects, take courses, and try to contribute to the team's code in small, low-priority tasks from time to time.
Be involved in code reviews and read every pull request if possible; that's where you'll feel how the work is evolving. However, avoid micromanaging, especially with senior mates; just be a helpful safety net if needed. Avoid commenting in CRs if you have nothing important to say; avoid the "here comes the boss comment" syndrome.
Detect any blocker and tackle it with the utmost urgency; it's one of your main responsibilities. Ask about blockers in every standup.
Be the tech proxy for your team so they can focus on their tasks. Be ready to be interrupted, and learn how to optimize context switches. Make sure your team understands you're approachable and that they can ping you about any problem without waiting; you're there to be interrupted at any time if needed.
Supervise estimations. Avoid being blindly conservative by default: think carefully about risks, and if a task is high-risk, then yes, be conservative with the estimate. But if you're estimating a task your team has already done in the past, it should be pretty straightforward (and perhaps should be automated?).
Try introducing and welcoming changes to how things are done, but be careful about when those changes are applied: if they impact a project, they can be difficult to justify. Related to this, avoid taking the easy route of always saying "NO" to new things just to avoid risks, as your platform should evolve as part of your plan.
Measure technical debt, take it on consciously, and fight it as part of your plan, with priorities.
Improve your soft skills. You'll spend more time talking to people than to your IDE.
Always be constructive with feedback: criticism without action points to follow is not a solution.
Be the goalkeeper against toxic comments and try to foster a positive environment. However, don't confuse a toxic comment with constructive criticism.
I've just finished the Learning How to Learn course, and I wanted to summarize some key ideas for myself and also encourage any reader to take the course. If you're a successful learner, many of the ideas will be familiar to you; no secret Coca-Cola formula is disclosed, but I think hearing them explained in a more elaborated way will still be valuable, and you might also learn some new things. I really enjoyed this course.
Focused versus diffuse modes of thinking.
You'll learn how to work with both modes of thinking; both are important and key for success.
If you only try to learn in the focused mode, you might have trouble working out creative ideas, linking ideas to others that seem unrelated at first, and with creativity in general.
When you're in the focused mode, it's as if distractions don't exist: all your CPU is dedicated to a single task, letting you concentrate on the information you're working on, processing it very efficiently, and memorizing what you need. However, you often need to see a bigger picture, as big as you might need. You start focused on a problem that seems to have no solution, then let your mind fly and start touching related ideas, like zooming out from the problem. That's when, sometimes, "magic" happens and you understand something relevant that you weren't able to see while focused on the problem alone. This is the diffuse mode.
This is what happens when you find the solution to a problem after taking a walk, after sleeping, or while moving your chair around instead of staring at the code. I've often seen this when you're trying to fix a bug and reach the moment where you think, "This cannot be happening, it's impossible". Then you go to the kitchen, or look at the sky, or go home and keep thinking about the problem at the gym, and at some point you realize there's something you'd missed that could actually be the problem, and it is! I think this is also related to rubber duck debugging, as explaining the problem to a mate lets you switch to diffuse-mode thinking.
This is a topic I've been interested in for some time. The course covers some ideas I already knew, like starting with the tasks you don't like so you get an "energy boost", focusing on process instead of product so you enjoy the routines and avoid thinking too much about the long-term goal, and trying techniques like Pomodoro.
It gives a nice analogy: you do lots of things in zombie mode and usually don't feel like procrastinating on them; you feel the urge to procrastinate when you know something requires effort, even if it's a task you actually want to do. Thinking about the product encourages procrastination. The good news, as the course says, is that once you start, the "pain" stops, and you can use techniques like Pomodoro to keep advancing.
It explains how important practice is, and the value of testing yourself (as in an exam) versus just re-reading the material. It also talks about the risk of over-learning, which you can avoid by testing yourself early so you know whether you're already ok with the subject. Related to this, there's an interesting concept called Einstellung.
It also explains spaced repetition, something I already knew about but never applied well and might try to do better in the future: you learn a subject and then review it at roughly exponentially increasing intervals, so your brain saves the information to the "hard disk" instead of just "RAM" (or even L2 cache). As I've said, I've typically studied a subject for a while and moved on to the next one, instead of keeping on reviewing what I learned before.
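The exponential review schedule described above can be sketched in a few lines (a toy illustration of the idea, not a formula from the course; real spaced-repetition systems like SM-2 adjust intervals per item based on recall quality):

```javascript
// Toy "exponential" review scheduler: review a topic at roughly doubling
// intervals so it moves from short-term to long-term memory.
function reviewOffsets(firstGapDays, reviews) {
  const offsets = [];
  let day = 0;
  let gap = firstGapDays;
  for (let i = 0; i < reviews; i++) {
    day += gap;
    offsets.push(day);     // day (relative to first study) to review again
    gap *= 2;              // double the gap after each successful review
  }
  return offsets;
}

// reviewOffsets(1, 5) -> reviews on days 1, 3, 7, 15, 31
```

The point is simply that five reviews spread over a month beat five reviews crammed into the first week.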
The course also talks about the importance of memorizing and practicing: understanding is not enough. The eureka moment is good, but you need practice to be able to work with the concepts and keep them in your mind long term.
It also explains that sleeping is very important, so your brain can organize what you've learned and you can be in good shape the next day: there are metabolic processes that block your learning, and you need rest to keep them in check. It also explains how important recall is for learning. This is something my father did great; I remember how he quizzed me on each lesson, and that was key to learning. When you're being asked or tested, you're not only checking whether you've learned: you're actually learning at that moment and consolidating concepts in your brain.
The course also explains the concept of "chunking": you study a concept or idea, understand it, practice it, and link it to other concepts, like a piece of a puzzle. You can memorize a concept without doing this, but it will be useless: a variable that gets garbage-collected because it's linked to nothing. Chunks are built from small pieces into bigger ones, and your diffuse mode of thinking can try to find new relations among all of them. That's what experience is about: when you've worked on lots of projects in different areas, you see how all of them, in some way, guide you to new decisions in your next projects.
There are other interesting concepts related to learning, like the "illusion of competence": when you think you've learned something because you've read a lot about it, but you haven't really learned it. They also talk about a related concept well known in our profession: impostor syndrome.
It also mentions that techniques like highlighting text can be worse than writing personal notes in the margins of the book or drawing a map of the ideas. E.g. if you're listening to a lecture, instead of starting from the top of the paper, start from the middle: write some ideas, link them, and so on.
Finally, it also mentions the importance of metaphors, and how you can replace some metaphors with better ones.
Practice, practice and practice:
In a world where all the information is on your smartphone, do we really need to memorize things, or do we just need to understand them?
It turns out you can't learn without some level of memorization, and studies suggest that being able to memorize important things is also positive for creativity. It's also related to practice: you'll memorize the things that matter as part of your practice. The course mentions some techniques to help with memorizing, like the memory palace, mnemonics, acronyms…
Tips for tests
It also gives some tips for tests. Apart from encouraging good sleep before exams, it promotes first checking all the questions to see the whole picture, then starting with a difficult exercise but jumping to an easier one if you're blocked after a minute. This surprised me, as I've always started with the easy ones and then gone for the trickiest, but the explanation makes sense: that way your brain is already working on the difficult ones while you're solving the easy ones, encouraging your diffuse mode, I guess…
So you open Wireshark and start capturing. Visit http://packet.city, and you can use a Wireshark filter on the IP that packet.city resolves to.
The project states that it is “the greatest website to ever fit in a single TCP packet”.
Is it true? Let's see what it needs: I can see 9 packets and some details.
First 3 packets (handshake)
Like any normal TCP connection we start with a 3-way handshake:
First, my laptop sends a SYN packet with an Initial Sequence Number, shown as 0 in Wireshark, but that's actually relative to a random initial value. This is my laptop requesting proof that its message can get through.
The server needs to send an ACK packet (to acknowledge the received SYN) and its own SYN (to prove it can reach the client). We can see both are actually sent in the same packet: SYN-ACK.
My laptop receives the SYN-ACK: as it's an ACK, it knows it can send packets to the server, and as it's a SYN, it knows the server needs an ACK, so it sends that ACK.
Once the server receives that ACK, the handshake has finished and the channel is considered usable. It might still fail later, but this is the minimum needed to start trying to communicate.
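The three packets above can be walked through with a toy sketch, using the relative sequence numbers Wireshark displays (real ISNs are randomized):

```javascript
// Toy walk-through of the TCP 3-way handshake with relative sequence
// numbers: each side ACKs the other's sequence number plus one.
function handshake(clientIsn = 0, serverIsn = 0) {
  return [
    `client -> server: SYN  seq=${clientIsn}`,
    `server -> client: SYN-ACK  seq=${serverIsn} ack=${clientIsn + 1}`,
    `client -> server: ACK  seq=${clientIsn + 1} ack=${serverIsn + 1}`,
  ];
}

handshake().forEach((line) => console.log(line));
```

After the third line, both sides have proven they can reach each other, matching packets 1 to 3 in the capture.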
Packet #4 (GET)
We’ve finished with the 3 first packets. The fourth one is the GET request.
Notice that the Push flag is enabled (PSH), and also ACK, as with any packet exchanged during the communication.
In the Hypertext Transfer Protocol section, we can see the sent HTTP request:
#5 and #6 packets
Packet #5 is an ACK:
The next packet is the HTTP GET response:
I guess this is what the project describes as "send response immediately after TCP session init".
I think PSH is enabled because Nagle’s algorithm is disabled as the project describes too.
In the Hypertext Transfer Protocol section, we can see that DEFLATE compression is being used, again exactly as described in the README.
See that the response is "200 k" instead of "200 OK": 1 byte saved there.
The maximum would be 1460 bytes for content and 40 bytes for IP and TCP headers.
In this case, the frame is 1292 bytes, TCP segment length is 1226 bytes and HTTP Content-Length is 1163 bytes, in detail:
Frame header (14 bytes): 7 bytes for the preamble, 1 byte for the SFD, 12 bytes for the source and destination MACs, and 2 bytes for the EtherType (IP) –> 22 bytes on the wire (14 ignoring the preamble and SFD, which aren't captured).
IP header (20 bytes): 1 byte for ip.version and the header length (4 bits each), 1 byte for ip.dsfield, 2 bytes for ip.len (Length), 2 bytes for ip.id (ID), 2 bytes for ip.flags and ip.frag_offset together (3 bits of flags, with ip.flags.df, Don't Fragment, set, plus 13 bits of fragment offset), 1 byte for ip.ttl (TTL), 1 byte for ip.proto (Protocol), 2 bytes for ip.checksum (header checksum), and 8 bytes for ip.src and ip.dst.
TCP header (32 bytes): 4 bytes for tcp.srcport and tcp.dstport, 4 bytes for tcp.seq (Sequence Number), 4 bytes for tcp.ack (Ack Number), 2 bytes for the data offset and tcp.flags, 2 bytes for tcp.window_size_value, 2 bytes for tcp.checksum, 2 bytes for tcp.urgent_pointer, and 12 bytes for tcp.options.
HTTP (1226 bytes): 15 bytes for “HTTP/1.1 200 k\n”, 21 bytes for http.content_length_header, 26 bytes for http.content_encoding_header, 1 byte of “\n” and 1163 bytes for Content (encoded)
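As a quick sanity check, the layer sizes above do add up to the captured TCP segment length and frame size:

```javascript
// Byte accounting from the capture: HTTP bytes are the TCP payload, and
// the frame adds the Ethernet (14), IP (20), and TCP (32) headers on top.
const http = 15 + 21 + 26 + 1 + 1163;    // status line + headers + "\n" + content
const tcpSegment = http;                  // TCP segment length
const frame = 14 + 20 + 32 + tcpSegment;  // captured frame (no preamble/SFD)

console.log(tcpSegment); // 1226
console.log(frame);      // 1292
```

Both totals match the values Wireshark reports for packet #6.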
As for the HTTP response content, it's easier to view the source code in the browser; there you'll see:
Last 3 packets
Finally, the last 3 packets: the server resets the connection with a RST packet. I guess they could use FIN, but RST is quicker. More about FIN vs RST.
Are you tired of messing with your own custom .vimrc, which eventually breaks or becomes difficult to maintain? Well, there are lots of projects for setting up a well-organized vim plugin environment; I'll just show one of them. I'm following the install instructions at https://github.com/jez/vim-as-an-ide: