Portable Network Graphics (PNG) is a popular raster graphics file format known for its lossless compression and wide support across various platforms and applications. In this blog post, we’ll delve into how PNG works, its format structure with a focus on headers and chunks, and how Draw.io leverages these features to embed drawing code within PNG files.
The PNG Format
PNG was developed to replace the older Graphics Interchange Format (GIF). It offers several advantages, including better compression and support for a wider range of colors and transparency levels. Unlike JPEG, which is a lossy format, PNG preserves the original image quality, making it ideal for images that require precise details, such as text, graphics, and illustrations.
Structure of a PNG File
A PNG file is composed of a series of chunks. Each chunk has a specific function and structure, allowing for flexible and efficient image data storage. Here’s a breakdown of the core components of a PNG file:
PNG Signature: The file starts with an 8-byte signature that identifies the file as a PNG image. This signature is essential for programs to recognize and process the file correctly.
Chunks: Following the signature, the file consists of multiple chunks. Each chunk has four main parts:
Length (4 bytes): The length of the data field.
Chunk Type (4 bytes): A four-letter ASCII code specifies the chunk type.
Chunk Data (variable length): The data contained in the chunk.
CRC (4 bytes): A cyclic redundancy check value for error-checking.
There are several critical chunks, including:
IHDR (Image Header): Contains basic information about the image, such as width, height, bit depth, color type, compression method, filter method, and interlace method.
PLTE (Palette): Defines the color palette used if the image is paletted.
IDAT (Image Data): Contains the actual image data, compressed with the DEFLATE algorithm (zlib format).
IEND (Image End): Marks the end of the PNG file.
Additional chunks can store metadata, text information, and other data, enabling extended functionalities.
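To make this layout concrete, here is a minimal sketch in Node.js that walks a PNG's chunks and prints each chunk type and data length. The file name is just an example and error handling is omitted:
const fs = require('fs');

const buf = fs.readFileSync('image.png');
let offset = 8; // skip the 8-byte PNG signature

while (offset < buf.length) {
  const length = buf.readUInt32BE(offset);                     // Length (4 bytes)
  const type = buf.toString('ascii', offset + 4, offset + 8);  // Chunk Type (4 bytes)
  console.log(type, length);
  offset += 8 + length + 4; // length + type fields, data, then the 4-byte CRC
  if (type === 'IEND') break;
}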
How Draw.io Embeds Code in PNG Files
Draw.io is an online diagramming tool that allows users to create a wide range of diagrams, from flowcharts to network diagrams. One of its unique features is the ability to embed the diagram’s XML code directly within a PNG file. This makes it easy to share and store diagrams without needing separate files for the image and the underlying code.
Here’s how Draw.io achieves this:
Embedding XML in a PNG: Draw.io takes advantage of PNG’s chunk-based structure by adding a custom chunk that contains the diagram’s XML data. This chunk is typically labeled zTXt or tEXt to indicate compressed or uncompressed textual data, respectively.
Custom Chunk Integration: When a user saves a diagram as a PNG in Draw.io, the application generates the diagram’s XML representation and compresses it if necessary. This XML data is then inserted into a custom chunk within the PNG file.
Reading Embedded Data: When the PNG file is opened in Draw.io, the application scans the chunks, identifies the custom chunk containing the XML data, extracts it, and reconstructs the diagram based on the embedded code.
This seamless integration allows users to benefit from the portability and compatibility of the PNG format while maintaining the ability to edit and update the diagrams within Draw.io.
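If you just want a quick peek at the embedded diagram data from the command line, and Draw.io stored it uncompressed in a tEXt chunk, something like the following usually reveals the URL-encoded XML (the file name is an example):
strings diagram.png | grep mxfile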
Conclusion
PNG is a versatile and powerful image format, and its chunk-based structure offers extensive flexibility for embedding additional data. Draw.io leverages this feature to embed the diagram’s XML code directly within PNG files, making it convenient for users to share and edit diagrams without losing any information. Understanding the inner workings of PNG and its structure not only enhances our appreciation for this format but also opens up possibilities for creative and innovative uses in various applications.
In the ever-evolving landscape of web development and content management, WordPress stands as a steadfast titan, empowering millions of websites with its user-friendly interface and robust features. However, deploying WordPress can sometimes be a challenging task, especially for those new to server management and configuration. Fortunately, with the advent of containerization and orchestration technologies like Kubernetes, deploying WordPress has become more streamlined and efficient than ever before. One such method is leveraging the Bitnami Helm Chart, offering a seamless solution for deploying WordPress on Kubernetes clusters. In this blog post, we’ll explore the process of deploying WordPress using the Bitnami Helm Chart, highlighting its simplicity and effectiveness.
What is Bitnami?
Before delving into the deployment process, let’s take a moment to understand Bitnami. Bitnami is a well-known name in the world of application packaging and deployment automation. They offer a vast library of pre-configured software packages, including popular applications like WordPress, Drupal, Joomla, and many others. These packages are designed to be easily deployable across various platforms, making it convenient for developers and administrators to set up complex applications with minimal effort.
Their WordPress chart is the most active and most downloaded among the ones listed on artifacthub.io.
Introducing Helm and Kubernetes
Helm is a package manager for Kubernetes that simplifies the process of deploying, managing, and upgrading applications. It uses charts, which are packages of pre-configured Kubernetes resources, to define the structure of an application. Kubernetes, on the other hand, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Deploying WordPress with Bitnami Helm Chart
Now, let’s walk through the steps of deploying WordPress using the Bitnami Helm Chart:
Setup Kubernetes Cluster: Before deploying WordPress, you’ll need to have a Kubernetes cluster up and running. This can be a local cluster using tools like Minikube or a cloud-based solution like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Microsoft Azure Kubernetes Service (AKS).
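For a quick local test, assuming you already have Minikube installed, something like this is enough to get a cluster:
minikube start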
Install Helm: Install Helm on your local machine or wherever you’ll be running the Helm commands. Helm provides a command-line interface (CLI) for managing charts and releases.
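One way to install it on Linux or macOS is the official helper script (package managers are also an option; see the Helm docs for the current instructions):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh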
Add Bitnami Repository:
Add the Bitnami Helm repository to Helm by running the following command:
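helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Newer versions of the chart are also published as OCI artifacts, so with a recent Helm you can alternatively install straight from oci://registry-1.docker.io/bitnamicharts without adding a repository.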
Customize Values (Optional): Optionally, you can customize the values in the values.yaml file to configure aspects of the WordPress deployment, such as resource limits, database credentials, and ingress settings. Make sure you have read their great README to understand the different options you have.
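As a rough illustration, a minimal values.yaml might look like this. The key names follow the chart's README but can differ between chart versions, and all values here are placeholders:
wordpressUsername: admin
wordpressPassword: changeme
ingress:
  enabled: true
  hostname: blog.example.com
resources:
  requests:
    cpu: 250m
    memory: 512Mi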
Deploy WordPress:
Finally, deploy WordPress using the Bitnami WordPress Helm Chart with the following command:
helm install my-wordpress bitnami/wordpress
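If you created a custom values.yaml, you can pass it to the install; the release name and namespace below are just examples:
helm install my-wordpress bitnami/wordpress -f values.yaml --namespace wordpress --create-namespace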
Access WordPress: Once the deployment is complete, you can access your WordPress site by retrieving the external IP address or domain associated with the WordPress service. Simply navigate to that address in your web browser, and you should see the WordPress installation wizard, allowing you to set up your site.
Hint: if you enabled ingress, you can describe the ingress resource to see how to reach it. Otherwise, describe the Service (svc).
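For example, assuming the release is called my-wordpress and lives in the wordpress namespace:
kubectl get svc -n wordpress my-wordpress
kubectl describe ingress -n wordpress my-wordpress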
Benefits of Using Bitnami Helm Chart for WordPress
Deploying WordPress with the Bitnami Helm Chart offers several advantages:
Simplified Deployment: The Helm Chart abstracts away the complexity of deploying WordPress on Kubernetes, making it accessible to developers of all skill levels.
Consistency: Bitnami’s extensive experience in packaging applications ensures that the WordPress deployment is reliable and consistent across different environments.
Customization: While the default configuration works out of the box, you have the flexibility to customize various aspects of the deployment to suit your specific requirements.
Scalability: Kubernetes enables seamless scaling of WordPress instances to handle varying levels of traffic and workload.
A common use case example
Let’s say you want to deploy WordPress with high availability, able to scale horizontally. Checking the README, you will want to increase replicaCount from the default of 1 to N, for example as in the snippet below.
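A hedged sketch of the relevant values (verify the exact autoscaling key names against the README of the chart version you use):
replicaCount: 3
# or let the chart create a HorizontalPodAutoscaler instead:
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 6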
This figure summarizes the components we would have:
Ingress: you might need a more complex ingress configuration if you want to enforce security at the network level.
WordPress pods: instead of a single replica, you will want N of them, able to grow automatically.
MySQL service: here lives most of your WordPress state, except uploads.
Memcached: make your frontend fast! Avoid hitting the DB over and over again for the same posts.
However, once you can have N pods you need shared storage for certain things. If your requirements allow it, it is better not to offer plugin installation from the admin interface; instead, bake the plugins into a custom image or install them via customPostInitScripts. That way you can use this config, which only uses the shared volume for uploads and the config file:
extraEnvVars:
  - name: WORDPRESS_DATA_TO_PERSIST
    # Note: we avoid persisting plugins/themes for performance reasons
    value: "wp-config.php wp-content/uploads"
If you need to offer plugin installation through the admin interface, you will need a really fast volume for it. Azure Files, for example, performs really badly here because of all those tiny PHP files, even with the premium offering. I thought OPcache would limit the impact, but it was not enough; leave a comment if you know a tweak for this, as I was unable to make it work well enough and the admin interface was horrible to use. At least the user-facing part can be easily cached, though.
Lastly, you really want to enable Memcached. You can either use a Memcached pod deployed by the chart or an external service. You will also need the W3 Total Cache plugin so that WordPress can take advantage of it.
memcached:
  enabled: true
Common pitfalls and solutions
Troubleshooting hints
When you are trying to reproduce performance issues, the best thing you can do is deploy pods running as root in DEV so that you can add a few var_dump calls or even install Xdebug, which will find the culprit for sure:
Note: Be aware that this is horrible for production envs. I recommend only enabling it in local/DEV k8s!!
# Configuration to run wordpress as root.
# Only enable for troubleshooting, e.g profiling with xdebug
#podSecurityContext:
#  enabled: true
#  fsGroup: 0
#containerSecurityContext:
#  runAsNonRoot: false
#  runAsGroup: 0
#  runAsUser: 0
#  readOnlyRootFilesystem: false
#  privileged: true
#  allowPrivilegeEscalation: true
You might also need to disable health checks so that you can debug stuff there:
Note: same note, only for local/DEV envs.
# Health checks override, only set as false for troubleshooting
#livenessProbe:
#  enabled: false
#readinessProbe:
#  enabled: false
#startupProbe:
#  enabled: false
Populating the volume and editing wp-config.php
You probably need to fill the /uploads folder or tweak the wp-config.php file. Just use kubectl cp.
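For example (the pod name and paths are illustrative; the Bitnami image keeps the WordPress installation under /bitnami/wordpress):
kubectl cp ./wp-config.php my-wordpress-abc123:/bitnami/wordpress/wp-config.php
kubectl cp ./uploads my-wordpress-abc123:/bitnami/wordpress/wp-content/uploads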
About wp-config.php persistence
The config file is generated from the values.yaml when helm install is run, but not on upgrade; that is expected behaviour. However, you can at least override the database config, which is a common thing you might need to change:
# We are persisting wp-config.php but we need to update the DB when needed
overrideDatabaseSettings: yes
Additionally, if you need to update the wp-config.php file you can use kubectl cp. An alternative would be using a secret for the config instead (check existingWordPressConfigurationSecret in the README).
Running customPostInitScripts every time the pods are created.
You can try this workaround; thank me in the comments, or please share a better solution if you know one:
my-script.sh: |
  #!/bin/bash
  set -x
  # Plugins repository is https://wordpress.org/plugins
  #export WP_CLI_PACKAGES_DIR=/bitnami/wordpress/wpcli-packages
  # Workaround for https://github.com/bitnami/charts/issues/21216
  (sleep 10 && rm -f /bitnami/wordpress/.user_scripts_initialized)&
  echo "Finished my-script.sh"
Customizing more stuff
If the Bitnami chart's values.yaml is not enough for your use case, you can always create your own chart that uses the Bitnami one as a child (dependency). That way you can have, for example, your own ingress.yaml file:
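A minimal sketch of such a wrapper chart's Chart.yaml (the chart name and version numbers are placeholders; check the current Bitnami chart version before pinning one):
apiVersion: v2
name: my-wordpress-wrapper
version: 0.1.0
dependencies:
  - name: wordpress
    version: "22.x.x"
    repository: oci://registry-1.docker.io/bitnamicharts
After running helm dependency update you can add your own templates (e.g. templates/ingress.yaml) next to it.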
You can also fork the chart easily by copying it locally and using a local reference instead of OCI. That is also a solution if you want to make sure you don't depend on docker.io for chart retrieval.
The Bitnami Helm Chart provides a hassle-free solution for deploying WordPress on Kubernetes, allowing developers to focus on building and managing their websites without getting bogged down by infrastructure concerns. By leveraging the power of Helm and Kubernetes, deploying WordPress has never been easier or more efficient. Whether you’re a seasoned Kubernetes pro or just getting started, the Bitnami Helm Chart for WordPress is a valuable tool in your arsenal for modern web development. However, there are different use cases that require different configurations, and you’ll need to work on that.
About this blog post
“A common use case example” and “Common pitfalls and solutions” have been 100% written by humans, whereas the rest of the blog post has been generated with LLM and tweaked a bit with extra details.
In an era where information is constantly flowing through various forms of media, the need to extract and transcribe audio content has become increasingly important. Whether you’re a journalist, a content creator, or simply someone looking to convert spoken words into written text, the process of transcribing audio can be a game-changer. In this guide, we’ll explore how to transcribe audio from an MP4 file to text using Whisper AI, a powerful automatic speech recognition (ASR) system developed by OpenAI.
What is Whisper AI?
Whisper AI is an advanced ASR system designed to convert spoken language into written text. It has been trained on an extensive dataset, making it capable of handling various languages and accents. Whisper AI has numerous applications, including transcription services, voice assistants, and more. In this guide, we will focus on using it for transcribing audio from MP4 files to text.
Prerequisites
Before you can start transcribing MP4 files with Whisper AI, make sure you have the following prerequisites in place:
Docker: Docker is a platform for developing, shipping, and running applications in containers. You’ll need Docker installed on your system. If you don’t have it, you can download and install Docker.
MP4 to MP3 Conversion: Whisper AI currently accepts MP3 audio files as input. If your audio is in MP4 format, you’ll need to convert it to MP3 first. There are various tools available for this purpose. You can use FFmpeg for a reliable and versatile conversion process.
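For example, FFmpeg can extract the audio track to MP3 like this (file names are illustrative):
ffmpeg -i input.mp4 -vn -acodec libmp3lame -q:a 2 output.mp3
The Whisper setup used later in this post is based on the openai-whisper-on-docker project; roughly, the preparation steps look like this (file names are examples):
git clone https://github.com/hisano/openai-whisper-on-docker.git
cd openai-whisper-on-docker
docker image build --tag whisper:latest .
VOLUME_DIRECTORY=$(pwd)
FILE_NAME=output.mp3
cp /path/to/output.mp3 ./output.mp3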
Finally, use the following command to transcribe the MP3 file to text using Whisper AI. In this example, we’re specifying the model as “small” and the language as “Spanish.” Adjust these parameters according to your needs:
docker container run --rm --volume ${VOLUME_DIRECTORY}:/data whisper --model small --language Spanish /data/$FILE_NAME
Once you execute this command, Whisper AI will process the audio file and provide you with the transcribed text output.
You’ll see the transcription is written to stdout, so consider redirecting the docker run output to a file.
docker container run --rm --volume ${VOLUME_DIRECTORY}:/data whisper --model small --language Spanish /data/$FILE_NAME &> result.txt
You can monitor how it goes with:
tail -f result.txt
If you see a warning like:
/usr/local/lib/python3.9/site-packages/whisper/transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead
This means you don’t have a CUDA setup, so Whisper will run on your CPU.
Also notice that here we’re using the small model, which is good enough but perhaps too slow when running on CPU. On my machine, it takes about 2.5 hours to transcribe 3 hours of audio.
Conclusion
Transcribing audio from MP4 to text has never been easier, thanks to Whisper AI and the power of Docker. With this guide, you can efficiently convert spoken content into written text, opening up a world of possibilities for content creation, research, and more. Experiment with different Whisper AI models and languages to tailor your transcription experience to your specific needs. Happy transcribing!
Note: I’ve written this blog post with the help of ChatGPT based on my own experiments with Whisper AI. I’m just too lazy to write something coherent in English. Sorry for that, I hope you liked it anyway.
Prompt: “Write a blog post whose title is HOWTO transcribe from mp4 to txt with Whisper AI. It should explain what Whisper AI is but also explain how to extract mp3 from mp4, and the following commands, ignore first column: 10054 git clone https://github.com/hisano/openai-whisper-on-docker.git 10055 cd openai-whisper-on-docker 10056 docker image build --tag whisper:latest . 10057 VOLUME_DIRECTORY=$(pwd) 10058 FILE_NAME=hello.mp3 10059 cp ../20230503_094932-Meeting\ Recording.mp3 ./hello.mp3 10060 docker container run --rm --volume ${VOLUME_DIRECTORY}:/data whisper --model small --language Spanish /data/hello.mp3”. After that, I added some extra useful information about performance.
Linux Cinnamon is a popular desktop environment used by many Linux users. While it is generally stable and reliable, like any software, it can sometimes fail or crash. When this happens, it can be frustrating for users who rely on Cinnamon to get their work done. In this blog post, we will explain why Cinnamon might fail and how to restart it when it does.
Why does Cinnamon fail?
There are several reasons why Cinnamon might fail or crash. Some common causes include:
System updates: Sometimes, updates to the Linux system or other software can cause compatibility issues that result in Cinnamon failing.
Hardware issues: If there is a problem with your computer’s hardware, such as a failing hard drive or faulty RAM, it can cause Cinnamon to crash.
User error: Occasionally, a user may accidentally make changes to their system or Cinnamon configuration that cause it to fail.
Bugs in Cinnamon: While Cinnamon is generally a stable and reliable desktop environment, it is not immune to bugs or other issues that can cause it to fail.
How to restart Cinnamon
If Cinnamon fails, the first step to take is to try restarting it. Here are the steps to follow:
Press Ctrl + Alt + F2 on your keyboard. This will take you to a command line interface.
Enter your username and password to log in.
Type the following command to stop the Cinnamon process: pkill -HUP cinnamon
Wait a few seconds, then type the following command to start Cinnamon again: cinnamon --replace &
Press Ctrl + Alt + F7 on your keyboard to return to the Cinnamon desktop environment.
If Cinnamon does not restart using these steps, you may need to try restarting your computer or troubleshooting other potential issues.
In conclusion, while Linux Cinnamon is generally a stable and reliable desktop environment, it can fail or crash for various reasons. When this happens, it can be frustrating, but restarting Cinnamon can often resolve the issue. If you are unable to restart Cinnamon using the steps outlined in this post, you may need to seek additional support or troubleshooting resources.
Bonus track!
There is indeed a more straightforward way to restart Cinnamon. Here are the steps to follow:
Press Alt + F2 on your keyboard. This will open the “Run Command” dialog.
Type the letter “r” into the text field and press Enter. This will restart the Cinnamon process.
Wait a few seconds for Cinnamon to restart. If everything has gone smoothly, you should be able to continue using Cinnamon as normal.
Using Alt + F2 and typing “r” to restart Cinnamon is a quick and easy way to get your desktop environment back up and running if it has failed or crashed. This method does not require logging in to the command line interface or typing any commands, making it more accessible for users who may not be familiar with the command line.
Have you ever run out of space on your root partition and wished you could make it bigger? Or maybe you had a separate swap partition that you wanted to get rid of? Well, fear not, my friend, because today we’re going to be diving into the world of resizing partitions and making the switch to using a swap file instead of a partition.
First of all, let’s talk about why this is possible. The ext4 file system, which is the default file system for most modern Linux distributions, allows for resizing and modifying the partition layout on the fly. This is thanks to the advanced features of ext4, such as its ability to handle online resizing and the use of an advanced journaling system.
Now that we’ve got the basics out of the way, let’s get down to business.
Backup your data
Before you do anything, it’s essential to backup your data. You never know what might go wrong during the resizing process, so it’s always better to be safe than sorry. You can use tools like rsync or tar to backup your important files to another location.
Disable swap
Before we begin resizing the root partition, we need to disable the swap partition, because it may be in use while we are working on the disk. You might also need to remove the swap partition (for example with fdisk or GParted) so that its space becomes free and the root partition can grow into it. To disable swap, you can use the following command:
sudo swapoff -a
Resize the root partition
Next, we need to grow the root partition and its filesystem. Keep in mind that resize2fs only resizes the ext4 filesystem; the partition itself has to be enlarged first (for example with fdisk, parted, or GParted from a live USB, reclaiming the space freed by the old swap partition). Once the partition boundary has been extended, grow the filesystem with the resize2fs tool. In this example, we will be increasing the size of the root filesystem to 20GB:
sudo resize2fs /dev/sda2 20G
Note that you’ll need to replace “/dev/sda2” with the name of your root partition.
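If the free space is adjacent to the root partition, the growpart tool (from the cloud-guest-utils package) is a convenient way to do the partition step from the command line. A quick sketch, with the device and partition number as placeholders:
sudo growpart /dev/sda 2
sudo resize2fs /dev/sda2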
Create the swap file
Now that we’ve resized the root partition, it’s time to create the swap file. A swap file is a file on your file system that is used as virtual memory. To create the swap file, we will use the fallocate tool. In this example, we will be creating a 4GB swap file:
sudo fallocate -l 4G /swapfile
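Before turning it into swap space, restrict the file's permissions to root only; otherwise mkswap and swapon will warn about insecure permissions:
sudo chmod 600 /swapfile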
Configure the swap file
Once the swap file has been created, we need to configure it as a swap space. To do this, we will use the mkswap tool:
sudo mkswap /swapfile
Enable the swap file
Finally, we need to enable the swap file so that it can be used as virtual memory. To enable the swap file, use the following command:
sudo swapon /swapfile
Update /etc/fstab
At this point, the swap file is fully configured and ready to use. However, we need to update /etc/fstab to enable the swap file on boot. To do this, add the following line to /etc/fstab:
/swapfile none swap sw 0 0
Also, make sure you remove the old swap partition line. Otherwise, the system will try to check it every time you boot, taking more time!
And that’s it! You’ve successfully resized your root partition and switched from a swap partition to a swap file. Your system should now boot faster since it no longer has to test the swap partition on each boot.
In conclusion, resizing partitions and switching from a swap partition to a swap file is a simple and effective way to manage your disk space and optimize your system’s performance. With the ext4 file system, the process is straightforward and can be done without having to take your system offline. Whether you’re running out of space on your root partition or just looking to streamline your system, I hope this guide has helped you accomplish your goals.
As always, when working with system configurations and disk partitions, it’s important to proceed with caution and to backup your data before making any changes. If you follow the steps outlined in this guide, you should have no trouble successfully resizing your root partition and switching to a swap file.
So, grab your terminal and get ready to play around with partitions and swap files. Who knows, you might just discover a new love for system administration.
When I was in college, I studied Eliza, one of the first natural language processing programs developed in the 1960s. Eliza was designed to simulate a psychotherapist and used a set of pre-defined rules and responses to generate replies to user input. At the time, Eliza was considered a significant advancement in the field of natural language processing, but it was limited in its abilities and could not provide detailed or accurate responses to complex questions.
Today, we have programs like ChatGPT, a large language model trained by OpenAI that uses the latest advancements in natural language processing to generate human-like responses to questions and prompts. ChatGPT was trained on a vast amount of text data from a variety of sources, which allows it to have a broad range of knowledge and the ability to provide detailed, accurate responses to a wide range of questions.
Here is a sample snippet of code for the Eliza program:
// Define a set of rules for generating responses
const rules = [
  {key: "i need", response: "Why do you need"},
  {key: "i want", response: "What would it mean to you if you got"},
  {key: "i feel", response: "Do you often feel"}
];
// Define a function for generating a response to user input
function generateResponse(input) {
  // Normalize the input so matching is case-insensitive
  const normalized = input.toLowerCase();
  // Use the find() method to look for the first rule that matches the input
  const rule = rules.find(r => normalized.includes(r.key));
  // If a match is found, return the corresponding response
  if (rule) {
    return rule.response;
  }
  // If no rule matches, return a default response
  return "I'm sorry, I don't understand what you're saying.";
}
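A quick, hypothetical usage example (the input strings are made up) to see the matcher in action:
// Example usage
console.log(generateResponse("I need a holiday"));    // "Why do you need"
console.log(generateResponse("Nice weather today"));  // falls back to the default response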
If you want to get a full implementation of Eliza, you can visit the following link on GitHub: https://github.com/brandongmwong/elizabot-js. This repository contains the complete source code for Eliza written in JavaScript, along with detailed instructions on how to use and customize it. In addition, the repository includes a live demonstration of Eliza in action, allowing you to see how it works and how it compares to other artificial intelligence systems.
Compared to Eliza, ChatGPT is much more advanced and can provide more detailed and accurate responses to user input. While Eliza used pre-defined rules and answers to generate its replies, ChatGPT uses machine learning algorithms and a vast amount of training data to generate its responses. This allows ChatGPT to have a much broader range of knowledge and the ability to provide accurate answers to complex questions.
Overall, while Eliza was a significant advancement in its time, it is now limited compared to more advanced programs like ChatGPT. ChatGPT’s ability to generate detailed, accurate responses to a wide range of questions makes it a valuable tool in the field of natural language processing.
Additionally, the book “The Master Algorithm” by Pedro Domingos provides an overview of the field of machine learning and discusses how it relates to natural language processing and programs like ChatGPT. This book is a valuable resource for anyone interested in learning more about the technology behind ChatGPT and how it is used in the field of artificial intelligence.
Overall, books like these provide a wealth of information about natural language processing and its applications, including ChatGPT and Eliza. They are valuable resources for anyone looking to learn more about these technologies and how they are used in the field of artificial intelligence.
There are many science fiction books that feature artificial intelligence or advanced natural language processing technology that is related to ChatGPT. Some books that you may be interested in include:
“The Hitchhiker’s Guide to the Galaxy” by Douglas Adams: This humorous science fiction novel features a ship’s computer named Deep Thought, which is capable of advanced natural language processing and can answer complex questions.
“Ready Player One” by Ernest Cline: In this novel, a virtual world called the OASIS is inhabited by intelligent avatars that are capable of sophisticated communication and problem-solving.
These books are all science fiction stories with advanced artificial intelligence or natural language processing technology. They may be of interest to readers who are interested in the capabilities and potential consequences of such technology.
This blog post has been 100% generated by ChatGPT.
In the future, bloggers may have to compete with tools like ChatGPT that can quickly and efficiently generate high-quality content. However, there are also opportunities for bloggers to differentiate themselves from AIs like ChatGPT. For example, bloggers who offer unique perspectives or have a distinct voice can stand out from the crowd and continue to be valuable to their audiences.
A few notes about the main points I learnt installing triple boot into my new PC:
When picking the hardware components, search for success stories related to those components so that you make sure they’re compatible and someone has already prepared a configuration you can build on instead of starting from zero, e.g. a non-APU Ryzen (without the G suffix) + Gigabyte X570 + Radeon RX 580.
Be aware that if you want to use Hackintosh as your only OS, Intel will be easier and better supported (e.g. Docker with the hypervisor, the Adobe suite…). My idea is to use Linux, leaving the OSX option for Xcode and Windows 10 for gaming and Windows-only software.
OpenCore is currently the only option for AMD, so do not waste time reading about Clover. See this video as an intro; it is not enough to get into action, but you’ll get a general idea: https://www.youtube.com/watch?v=l_QPLl81GrY
You can lose data quite easily, e.g touching partitions, so make sure you backup if needed.
Once you’ve seen the video and read the guide, you’ll be ready if you understand these topics: boot USB, SSDT, ACPI, kexts, UEFI, config.plist, SMBIOS.
If you find someone who already succeeded with your exact CPU + motherboard combination (lucky me!), the setup will be way easier, as you might avoid the pain of testing different kexts and configs, but you still need to make sure you understand what you’re doing (see the previous points). Otherwise your Mac install menu will appear in Russian and you’ll have to figure out why that happens and how to reset the NVRAM.
You need to install the OSs in this order: Windows, Linux, Mac (3 pendrives). Both Windows and Linux need to be installed in UEFI mode, and once both are running like that, you’ll need to resize the EFI partition to at least 200MB, as that is a Mac requirement (the EFI partition created by default by Windows is 100MB…).
You also need a GParted USB so that you can create the Mac partition with the free space you left after installing Windows and Linux. You’ll format it as HFS+, but in the Mac installer’s partitioning tool you’ll need to enable journaling for it (File > Enable Journaling) and convert it to APFS. Otherwise it will complain about a missing “firmware partition” (EFI) even though you had already prepared it.
In the middle of the installation it will reboot without warning and resume the installation from the disk.
If the latest Realtek kext does not work for you (e.g. you are unable to configure the NIC during installation), try v2.2.2; it did the trick for me.
Once successfully installed you typically need to do a few postinstall things:
Just in case a Windows update messes up the OpenCore boot loader, make sure you add BootStrap.efi as a boot entry in the BIOS. That way you’ll always have the “OpenCore” option in the BIOS.
You need to update the hard disk’s EFI partition. If you prepared the Mac boot USB with gibMacOS you might not have an EFI partition on it; you just need to mount the hard disk’s EFI partition manually, delete its EFI folder, and drop in the one you have on the boot USB.
If OpenCore is unable to detect Linux, make sure you installed it in UEFI mode, e.g. in Linux Mint by picking the EFI partition as the boot partition.
It is an application where raw CPU power is rarely a limiting factor; the problems are the amount of data, the complexity of data, and the speed at which it changes. It is built from standard building blocks that provide commonly needed functionality.
In this chapter, we see the fundamentals of what we are trying to achieve.
Recently I needed to close a Google Apps account, and I tried to migrate albums programmatically. I’ll document here the needed steps and explain why this Google API is useless for most of us:
First you need an app token; you can get it from the Google Console at https://console.developers.google.com. There you need to register your project and enable the relevant API from the library.
You should now have both a client_id and a client_secret, so you can fetch the authorization code quite easily with an OAuth2 flow:
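As a rough sketch of that flow with curl (these are Google's standard OAuth2 endpoints and the Photos Library scope, but treat the exact parameters as illustrative and check the current documentation):
# 1) Open this URL in a browser, approve access, and copy the authorization code
https://accounts.google.com/o/oauth2/v2/auth?client_id=$CLIENT_ID&redirect_uri=$REDIRECT_URI&response_type=code&scope=https://www.googleapis.com/auth/photoslibrary
# 2) Exchange the code for an access token
curl -d client_id=$CLIENT_ID -d client_secret=$CLIENT_SECRET -d code=$CODE -d redirect_uri=$REDIRECT_URI -d grant_type=authorization_code https://oauth2.googleapis.com/token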
But in the end, I just did it manually, zooming out in the web client. It turns out Google only offers consent scopes that allow you to manipulate photos and albums created with your own app, so you can’t move photos between albums created by the official tool. This means you cannot organize your library automatically unless you only need to work with photos you would upload with your own app…
2019 was an awesome year for me, mainly because I became a father 🤗, but I also found time to keep up my learning habit 🤓, something very important 15 years after my first job in the field. So I’d like to list and elaborate on the Coursera courses I did and why:
Conflict Resolution Skills (cert): a good introduction, something essential even if you’re in an individual contributor position but critical in management.
Kotlin for Java developers (cert): a great course in order to jump from Java to Kotlin. We’ve been increasingly using Kotlin at work (even for microservices!) so I found it was a good way to review the language in general.
Programming Languages, Part A (cert): getting into functional programming was something I had wanted to do for a long time. I did some Haskell at uni, but that was ages ago, and I only knew the typical few FP tricks used in JavaScript or Kotlin; using a pure FP language is a very different thing.
Programming Languages, Part B (cert): Part A used SML; this part used Racket, which was a bit of a parentheses nightmare at first but turned out to be very fun, as I practiced implementing a little programming language, something I hadn’t done since university.
If you have a recommendation of any online course for 2020 please leave a comment 🙂