RAFT & PAXOS
20 July 2020
Rust Is Not Yet A Better Language
- Questionable and at times dodgy Rust arithmetic
- Function calls touch memory twice in Rust
- Rust is not faster than C++
- Unproven safety mechanism in Rust
- Painful rewrite of C library headers (see the sketch after this list)
- Compilation times are slow
- The compiler is a pain to work with: it lacks transparency and is inconsistent, with some behaviour merely documented rather than properly checked
- Integration with other languages is difficult
- Rust has a bigger assembly code footprint than C++
- Unsafe blocks are not checked
- Most enterprise and technology products are built on C/C++/Java interfaces, so adopting Rust might require a complete rewrite
- Rust doesn't play nice with other languages
- Rust ecosystem tools are insufficient for prime time use
- Tedious and verbose
- No formal specification or community-driven release process
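To make two of these points concrete, here is a minimal sketch of my own (not taken from any of the arguments above): calling into a C library from Rust means re-declaring the relevant part of the C header by hand (or generating the declarations with a tool such as bindgen), and the body of an `unsafe` block is trusted rather than verified by the compiler.

```rust
// Illustrative sketch only. The compiler checks that `unsafe` is present,
// but it does not verify what happens inside the block, nor that this
// hand-written declaration matches the real C signature.

// Hand-written re-declaration of a function from the C standard library.
extern "C" {
    fn strlen(s: *const std::os::raw::c_char) -> usize;
}

fn main() {
    let s = std::ffi::CString::new("hello").expect("no interior NUL byte");
    // Passing a valid, NUL-terminated pointer is entirely the programmer's
    // responsibility; the compiler cannot check it here.
    let len = unsafe { strlen(s.as_ptr()) };
    println!("strlen = {}", len);
}
```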
18 July 2020
L4-L7 Network Services Definition
L4 - Transport e.g. TCP/UDP
L5 - Session e.g. connections
L6 - Presentation e.g. SSL/TLS
L7 - Application e.g. HTTP/SIP
Together, L4-L7 network services provide a set of functions for firewalling, load balancing, service discovery, and monitoring.
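As a rough illustration (my own sketch, not part of the definition above), the snippet below marks where these layers surface in ordinary service code: accepting a TCP connection is an L4 concern, the accepted connection is the L5 session, TLS would wrap the stream at L6, and parsing the HTTP request line is an L7 concern. An L7 load balancer routes on what is parsed here, while an L4 load balancer forwards the raw bytes without looking at them. The address and response are arbitrary.

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // L4 (transport): bind a TCP socket and accept connections.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // L5 (session): the accepted connection is the session we operate on.
        let mut reader = BufReader::new(stream.try_clone()?);
        let mut request_line = String::new();
        // L6 (presentation): a TLS handshake would wrap the stream here.
        // L7 (application): interpret the bytes as an HTTP request line,
        // e.g. "GET / HTTP/1.1". An L7 load balancer routes on this line;
        // an L4 load balancer would forward the raw bytes without parsing.
        reader.read_line(&mut request_line)?;
        println!("L7 request line: {}", request_line.trim_end());
        stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")?;
    }
    Ok(())
}
```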
Labels: big data, Cloud, data science, devops, distributed systems, microservices, telecommunications
15 July 2020
Optimization
When should one optimize an implementation? Definitely not during the prototype stage, and definitely not by spending time optimizing an incorrect solution that fails to solve the business case and wastes valuable hours. Most software development practices follow three core principles: implement, test, refactor. At the implementation stage one codifies the algorithmic logic; the testing stage verifies that logic; and only once the implementation has been tested should it be refactored. In agile practice these steps may even run cyclically over granular pieces of functional code scoped to a feature. General coding practice dictates loose coupling and high cohesion, treating algorithmic implementations as functional blackboxes behind their parameters. Only after the refactor stage should one start to look at performance, reliability, and scalability, and even then the first step is to identify the bottleneck, because premature optimization is harmful. Profiling definitely helps here. An even better approach is to continuously build and deploy the implementation artefact for isolated testing, where one can throw data at it in a simulated mode. When deploying an implementation to the cloud, constraints have to be viewed at both the system level and the application level, and in many cases an application-level constraint is negligible enough that it can be offloaded to the system level.
- Don't bother optimizing a solution in prototype mode; focus on solving the problem (if the solution turns out to be incorrect, any time spent optimizing it is wasted)
- Focus on testing the implementation
- Once the implementation is correct, refactor it - focus on high cohesion, loose coupling
- Keep the implementation loosely coupled from third-party libraries (and don't get hung up on details such as whether a third-party library uses a C implementation for speed; at this stage it is more important that the algorithmic implementation is correct)
- Treat third-party libraries as dependencies
- Use profilers and correct metrics to check for performance
- If there is a bottleneck identify where it is at application-level or system-level
- Only optimize as an afterthought, once a bottleneck has been identified (how can one optimize without knowing where the bottleneck is? In some cases experience will suggest early in the implementation where a bottleneck is likely to occur, and eager optimization informed by that experience can be beneficial and save time later in the process)
- Don't optimize for the sake of optimizing, it may be unnecessary, especially in the cloud
- When a dependency is the bottleneck, replace it with another, more performant dependency or create own
- Swapping dependencies should not affect the algorithmic implementation (as long as the replacement is also correct), which is the whole point of treating functions as blackboxes behind their parameters; create a wrapper if needed and apply appropriate best practices and patterns (see the sketch after this list)
- Is the bottleneck at application-level negligible enough to be offloaded to system-level cloud infrastructure?
- Don't eagerly optimize early and often - premature optimization only leads to more issues and complexity
- Only optimize when there is a logical need to do so (be pragmatic)
- With increasing level of experience, one can deduce when, where, and how to optimize for the outcome of results
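A minimal sketch of the dependency-as-blackbox idea above (my own illustration; `Sorter`, `NaiveSorter`, and `FasterSorter` are hypothetical stand-ins for a third-party dependency and its replacement): the algorithmic code depends only on a small trait, a timing measurement identifies whether the dependency really is the bottleneck, and only then is the implementation swapped.

```rust
use std::time::Instant;

// The blackbox boundary: the algorithmic code depends on this trait,
// not on any concrete third-party library.
trait Sorter {
    fn sort(&self, data: &mut Vec<u64>);
}

// Wrapper around the "current" dependency (modelled here as insertion sort).
struct NaiveSorter;
impl Sorter for NaiveSorter {
    fn sort(&self, data: &mut Vec<u64>) {
        for i in 1..data.len() {
            let mut j = i;
            while j > 0 && data[j - 1] > data[j] {
                data.swap(j - 1, j);
                j -= 1;
            }
        }
    }
}

// Wrapper around a more performant replacement (here the standard library).
struct FasterSorter;
impl Sorter for FasterSorter {
    fn sort(&self, data: &mut Vec<u64>) {
        data.sort_unstable();
    }
}

// The algorithmic implementation is unchanged whichever dependency is used.
fn process(sorter: &dyn Sorter, mut data: Vec<u64>) -> u64 {
    sorter.sort(&mut data);
    data.first().copied().unwrap_or(0)
}

fn main() {
    let data: Vec<u64> = (0..10_000u64).rev().collect();
    let naive = NaiveSorter;
    let faster = FasterSorter;
    let candidates: [(&str, &dyn Sorter); 2] = [("naive", &naive), ("faster", &faster)];

    // Measure first: only swap the dependency if it really is the bottleneck.
    for (name, sorter) in candidates {
        let start = Instant::now();
        let min = process(sorter, data.clone());
        println!("{name}: min = {min}, took {:?}", start.elapsed());
    }
}
```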
"The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming. " - Donald Knuth
"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. " - Donald Knuth
14 July 2020
kops
Labels: big data, Cloud, computer science, data science, distributed systems, machine learning, natural language processing
1 July 2020
Types of Narratives
- Empirical Narratives
- Fictional Narratives
Labels: big data, data science, deep learning, machine learning, natural language processing, text analytics
Narratology
- Cognitive Narratology
- Contextualist Narratology
Labels: big data, data science, deep learning, machine learning, natural language processing, text analytics
Discourse Modes
- Narrative
- Argumentative
- Expository
- Descriptive
- Instructive
Labels: big data, data science, deep learning, machine learning, natural language processing, text analytics