Tech – FMath 22 https://www.fmath.info Conference Fri, 18 Nov 2022 09:30:03 +0000 en-US hourly 1 https://wordpress.org/?v=5.8.2 https://www.fmath.info/wp-content/uploads/2021/12/cropped-maths-32x32.png Tech – FMath 22 https://www.fmath.info 32 32 iOS and Android OS: Similarities and Differences https://www.fmath.info/ios-and-android-os-similarities-and-differences/ Fri, 18 Nov 2022 09:29:59 +0000 https://www.fmath.info/?p=8011 iOS and Android are the major phone and tablet operating systems. The latter is the most popular worldwide while the former is the most popular in the USA with 60% […]

The post iOS and Android OS: Similarities and Differences appeared first on FMath 22.

]]>
iOS and Android are the two major phone and tablet operating systems. The latter is the most popular worldwide, while the former leads in the USA with about 60% of users. Both systems have their strong and weak points. Take a look at the discussion below to learn more.

What is the difference between iOS and Android?

A quick look at either operating system will turn up plenty of similarities, since both handle all the major tasks. A closer look at specific operations and support, however, proves otherwise. Walk through the categories below to see the differences.

Convenience

To begin with, iOS is easier to use than Android because most of its apps are native. This ease of use comes from the hardware and software being designed by the same manufacturer, so they work together smoothly. iOS also offers seamless usage and linking across all Apple devices.

Second, Android leaves more room than iOS for customizing its interface and making adjustments such as changing the appearance. This makes it the best choice for those who love customization.

Besides, it's also easier to multitask on Android than on iOS. You can comfortably split the display, play a video, and carry out other tasks at the same time.

Android also offers the option of expanding storage with SD cards, an option the iPhone lacks. Apple compensates for this by offering large internal storage.

In terms of design creativity, Android offers more variety than iOS. Phones like Samsung's come as folding or stylus devices, while the iPhone comes only in a standard design. Android phones can also offer a more complete full-screen display than the iPhone, thanks to their small or retractable selfie cameras.

Apple users can also easily synchronize and manage their files and data across all Apple devices.

Last, most app developers prefer to build their applications for the iPhone first and move to other platforms later. Apple users therefore get early access to most applications. Moreover, those applications are more tightly integrated with Apple devices than they are elsewhere.

So what about access to personal data? Keep reading to learn more.

Privacy

First, users of both operating systems give up some privacy during app installation. Most apps request access to certain information, such as contacts and media files.

Android lets its users grant permission for these requests. However, many apps exploit this by requesting far more than they need; it is not unusual for a single app to ask for less access on iOS than it does on Google's operating system.

Generally, iOS offers more control than Android over the permissions granted to applications. It also offers advanced encryption that keeps the device's data safe even if it falls into the wrong hands.

Repairability

First, when it comes to repairability, the iPhone is generally difficult to repair, especially given its restrictions on third-party repair. Many of the iPhone's components are tied to its software, so if a third party performs a replacement, the new part will either lose some features or trigger warning messages. You can avoid this by getting repairs from the manufacturer or authorized dealers.

Second, most Android devices take more time to repair, the latest Samsungs for example, because swapping components like batteries or screens requires a detailed teardown. On the good side, their hardware is generally more accessible.

Last, in terms of access to repair, Apple leads: its support centers and authorized dealers are widespread. Moreover, Apple runs an online platform where iPhone users can ask other users and experts for help. General maintenance of an Apple device is also low, since its components are durable. As for ease of breaking, both iOS and Android devices can break, though you can get a cracked iPhone screen proficiently fixed in NYC.

Support

This is one area where iOS has taken the lead. It provides more consistent and timely updates for its users than Google's operating system.

Moreover, Apple supports its devices for longer than Android does: the former offers 5 to 7 years of support, the latter 3 to 5. This is great for Apple users with old devices, as they stay protected against the latest threats.

Lastly, Android users do not receive updates at the same time because of the platform's wide ecosystem: once Google releases an update, each brand manufacturer decides when to roll it out. Apple users, on the contrary, all receive their updates simultaneously.

Conclusion

To sum up, either an iPhone or an Android is a good choice depending on your needs. If you're after a gadget that's easy to use, has a longer life span, and performs swiftly, the former is the perfect choice.


The post iOS and Android OS: Similarities and Differences appeared first on FMath 22.

]]>
5 DevOps tips to help the novice developer https://www.fmath.info/5-devops-tips-to-help-the-novice-developer/ Thu, 02 Sep 2021 19:07:15 +0000 https://wordpress.iqonic.design/epy_wp/?p=2467 DevOps is in high demand in technology today, from CI/CD (continuous software integration and deployment) to container management and server preparation.

The post 5 DevOps tips to help the novice developer appeared first on FMath 22.

]]>

DevOps is in high demand in technology today, from CI/CD (continuous integration and continuous deployment of software) to container management and server provisioning. You might even call it a buzzword: it's on everyone's lips. As a developer, you can be part of the DevOps effort, not necessarily provisioning servers and managing containers, but creating great software.
A lot of what developers, DevOps engineers and IT teams do in today's software development lifecycle revolves around tooling, testing, automation and server orchestration. That holds whether the team maintains a large open-source project or consists of a single person. Here are five DevOps tips for developers who want to work more efficiently and faster.

YAML makes frontend work easier

Introduced in 2001, YAML has become the language of much declarative automation: it is widely used in DevOps and in all kinds of configuration work. YAML originally stood for Yet Another Markup Language; the official expansion is now "YAML Ain't Markup Language". YAML markup is easy to read because it leans far less on brackets, braces and quote characters ({}, [], ").
Why is this important? By learning, or just improving, your YAML skills, you can more easily express application configuration, such as settings, in a language that is easy to write and read.
YAML files are everywhere, from corporate development workflows to open-source projects. There are lots of YAML files on GitHub, too; they power a product we really like, GitHub Actions (more on that later).
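To make the readability point concrete, here is a small workflow file in the GitHub Actions style just mentioned; the job name and the `make test` command are made up for illustration:

```yaml
# A hypothetical CI workflow. Indentation carries the structure;
# no braces and almost no quotes are needed.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

Compare this with the equivalent JSON: the same structure, but buried under brackets and quotation marks.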

DevOps tools help speed things up

Let's get something straight first: "DevOps tools" is a broad term that covers cloud platforms, server orchestration tools, code management, version control, and more. These are all technologies that make writing, testing, deploying and releasing software easier and leave worries about unexpected failures in the past. Here are three DevOps tools to speed up your workflow so you can focus on building great software.
Git
You probably know that Git is a distributed version control system and source code management tool. For developers, it's the foundation of the basics and a popular DevOps tool.
Why? Because nearly every modern workflow, from code review to CI/CD, is built on top of its branching and merging model.
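As a sketch of that branching model, here is a minimal feature-branch flow; the repository, branch and file names are illustrative, and `git init -b` assumes Git 2.28 or newer:

```shell
set -e
repo=$(mktemp -d)                    # throwaway repository for the demo
cd "$repo"
git init -q -b main
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"

git switch -q -c feature/demo        # work happens on a branch...
echo 'print("hello")' > app.py
git add app.py
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "add app"

git switch -q main                   # ...and is merged back
git merge -q --no-edit feature/demo
git log --oneline
```

The branch isolates work in progress; `main` only ever sees it once it is merged.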

Cloud-based integrated development environments (IDEs)

I know, the full name is a mouthful (thanks, marketing); "cloud IDEs" is simpler. Either way, these platforms are worth exploring right away.
And here’s why. Cloud IDEs are fully hosted development environments that allow you to write and run code, debug it, and quickly deploy new, pre-configured environments. Need validation? We launched our own cloud-based IDE, Codespaces, at the beginning of the year and started using it to build GitHub. It used to take us up to 45 minutes to deploy new developer environments – now it only takes 10 seconds.
With cloud IDEs, it’s very easy and fast to deploy new, pre-configured development environments, including one-offs. Plus, with them, you don’t have to think about computer power (hello to all those who dare to write code on tablets).

Server orchestration for greater flexibility and speed

If you're building a cloud application, or even just using various servers, virtual machines, containers or hosting services, you're probably dealing with multiple environments. Being able to verify that the application and the infrastructure fit together means you're no longer scrambling at the last minute to get the software running on that infrastructure.
This is where server orchestration comes in handy. Server or infrastructure orchestration is usually the job of IT and DevOps teams. It covers setting up, managing, provisioning and coordinating the systems, applications and underlying infrastructure that run the software.
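As an illustration of what such orchestration looks like in practice, here is a minimal Ansible-style playbook; the `web` host group and the choice of nginx are hypothetical:

```yaml
# Declares the desired state of the servers; the orchestrator
# makes reality match it, on one machine or a thousand.
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The key design idea is declarative: you state *what* should be true, not the command sequence to get there.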

Try writing repetitive tasks in Bash or PowerShell

Imagine: you have a bunch of repetitive tasks running locally, and they take too much time each week. There’s a better, more efficient way to handle them – write scripts with Bash or PowerShell.
Bash has deep roots in the Unix world. It’s the backbone for IT, DevOps teams, and many developers.
PowerShell is younger. Developed at Microsoft and launched in 2006, PowerShell replaced the command shell and early scripting languages for task automation and configuration management in Windows environments.
Today, both Bash and PowerShell are cross-platform (although most people used to working in Windows use PowerShell, and most people familiar with Linux or macOS use Bash).
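For instance, a repetitive local chore like compressing old logs shrinks to a few lines of Bash. The paths below are illustrative (a temporary directory stands in for a real log directory), and `touch -d` assumes GNU coreutils:

```shell
set -e
logdir=$(mktemp -d)                          # stand-in for a real log directory
touch "$logdir/fresh.log"                    # recent file: should be left alone
touch -d "10 days ago" "$logdir/stale.log"   # old file: should be compressed

# Compress every .log file older than 7 days
find "$logdir" -name '*.log' -mtime +7 -exec gzip {} \;

ls "$logdir"
```

Put a script like this on a cron schedule and the weekly chore disappears entirely.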

The post 5 DevOps tips to help the novice developer appeared first on FMath 22.

]]>
Neural networks may be simpler than people think https://www.fmath.info/neural-networks-may-be-simpler-than-people-think/ Fri, 13 Aug 2021 20:12:55 +0000 https://wordpress.iqonic.design/epy_wp/?p=1014 Neural networks partly seem to undermine the traditional theory of machine learning, which relies heavily on ideas of probability theory and statistics. What is the mystery of their success?

The post Neural networks may be simpler than people think appeared first on FMath 22.

]]>

Neural networks partly seem to undermine the traditional theory of machine learning, which relies heavily on ideas of probability theory and statistics. What is the mystery of their success?

Researchers have shown that networks with an infinite number of neurons are mathematically equivalent to simpler machine learning models known as kernel methods. The striking results could be explained if this equivalence extends beyond "perfect" neural networks.

ML models are generally thought to perform best when they have the right number of parameters. If there are too few, the model may be too simple and fail to capture all the nuances of the data. With too many, the model becomes more complex and learns such fine details that it can no longer generalize. That is what's called overfitting.

“It’s a balance between learning too well from the data and not learning at all. You want to be in the middle,” says Mikhail Belkin, a machine learning researcher at the University of California, San Diego, excited by the new prospects.

Deep neural networks like VGG are widely believed to have far too many parameters, which means their predictions should suffer from overfitting. But this is not the case; on the contrary, such networks generalize to new data with surprising success. Why? No one knew the answer, although many tried to find out.

Naftali Tishby, a computer scientist and neuroscientist at the Hebrew University of Jerusalem, argued that deep neural networks first learn from the data and then squeeze it through an information bottleneck, discarding irrelevant information, and that this is what helps them generalize. Other scientists believe this does not happen in all networks.

The mathematical equivalence of kernel methods and idealized neural networks gives clues as to why and how networks with a huge number of parameters arrive at their solutions.

Kernel methods are algorithms that find patterns in data by projecting it into very high-dimensional spaces. By studying the more comprehensible kernel equivalents of idealized neural networks, researchers can learn why complex deep networks converge during training to solutions that generalize well to new data.
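As a toy illustration of a kernel method (not the construction from the research itself), here is RBF kernel ridge regression in a few lines of NumPy; the synthetic data and the bandwidth `gamma` are made up. The kernel evaluates similarities in a high-dimensional feature space without ever visiting it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))             # synthetic 1-D inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)  # noisy targets

def rbf(A, B, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2): an implicit projection
    # into an infinite-dimensional feature space.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Ridge-regularized fit: solve (K + lambda * I) alpha = y
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

X_new = np.array([[0.0], [1.5]])
pred = rbf(X_new, X) @ alpha   # predictions near sin(0) and sin(1.5)
print(pred)
```

Unlike a deep network, every step here is a closed-form linear-algebra operation, which is exactly why the equivalence makes networks easier to analyze.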

"A neural network is a bit like a Rube Goldberg machine: it's unclear what's really important about it," Belkin argues. "Kernel methods are not that complicated. I think reducing neural networks to kernel methods lets you isolate the driving force behind what's going on."

The post Neural networks may be simpler than people think appeared first on FMath 22.

]]>
Mathematics for the programmer https://www.fmath.info/mathematics-for-the-programmer/ Fri, 23 Jul 2021 22:43:37 +0000 https://wordpress.iqonic.design/epy_wp/?p=526 One of the most frequent questions that newcomers, people who are far from programming, and one of the biggest stereotypes of our time ask: does a programmer need mathematics?

The post Mathematics for the programmer appeared first on FMath 22.

]]>

One of the most frequent questions from newcomers and from people far removed from programming, and one of the biggest stereotypes of our time: does a programmer need mathematics? No one will give a complete answer to this question, because there are so many directions in programming.

Modern and very popular programming languages can solve many problems quickly, and their toolkits are deliberately designed not to cause developers discomfort in the development process.

Of course, most modern developers prefer to head into frontend or backend work rather than burden themselves with learning lower-level languages.

Programmers who work in these areas and write in JavaScript, Python, PHP and the like earn good money, work at a high level of abstraction, know several technologies, and do not perform complex mathematical calculations. Most of the time, that is. And that's all fine, especially when people know what they want. When asked "Do you need math?", they answer that basic math is enough for this kind of work, but that for more complex projects and technologies it is worth studying something beyond the school curriculum.

The answer is different when the same question is put to developers who work only at the high level, building websites. They say that math is not needed at all: addition, subtraction, division and multiplication at most, and nothing beyond basic combinatorics.

That makes sense. However, it is worth pausing on one important detail that almost no one ever voices: all computing machines work with mathematics, and mathematics lies at the origin of all programming.

All software arithmetic is about numbers. Computers use binary code (1s and 0s), and this is the code everything runs on, from operating systems to neural networks. Anything that has to do with computation always interacts with numbers.
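A tiny Python illustration of that point: the familiar decimal numbers are just bit patterns underneath.

```python
n = 42
bits = format(n, "b")   # binary representation of 42
print(bits)             # '101010'
print(int(bits, 2))     # parse the bits back: 42
print(n & 1)            # lowest bit is 0, so n is even
```

Every higher-level construct, strings, images, neural network weights, ultimately reduces to arithmetic on such bit patterns.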

When complex calculations had to be done, people used pencil, paper and their minds. As progress went on, it became clear that such solutions take a great deal of time, so computers were created to automate certain processes. For those processes to be automated, templates had to be developed, since every computerized machine works from previously created patterns. And so it turns out that today's information space is built from the modernized patterns of the past.

Recall that all the people who created such things in the computational sciences had a solid knowledge of that unloved school subject. And modern computer technology has not done away with that science.

All complex low-level programming languages are based on mathematics, and modern high-level ones are too, because they are built on top of the low-level ones. And the higher the level of the language, the harder it is to use it to create something complex and large.

That is why many try to avoid learning C/C++, Java and similar languages, preferring to go into web development, where understanding the field and its technology is easier and the pay is no worse.

Think about it: all the complicated stuff is written in low-level languages and draws on mathematical knowledge. Of course you don't need to take an entire course of higher mathematics, but if you seriously want to create, say, your own OS, write a great framework, or build a unique artificial intelligence, it will be almost impossible without solid mathematical knowledge and, for AI, the corresponding skills in fields like NLP.

Answering the question "Does a programmer need math?", I can safely say "Yes." Whatever a programmer does, the more knowledge they have in the exact sciences, the better for them as an expert.

This science should not be neglected, and it certainly cannot be said that knowing only addition, subtraction, multiplication and division will suffice.

The post Mathematics for the programmer appeared first on FMath 22.

]]>
Instability without synchronization https://www.fmath.info/instability-without-synchronization/ Sun, 04 Jul 2021 19:13:47 +0000 https://wordpress.iqonic.design/epy_wp/?p=2484 One of the oldest examples of engineering is a tree. No, not growing in the woods, but a tree thrown across a stream to cross it more conveniently, quickly and dryly.

The post Instability without synchronization appeared first on FMath 22.

]]>

One of the oldest examples of engineering is a tree. Not one growing in the woods, but a tree thrown across a stream to cross it more conveniently, quickly and dryly. These were the first bridges. Later they became more complex: people began to use stone, and then metal. Cities grew, trade developed, and rivers, lakes, gorges and hollows no longer hindered people's movement. Advances in technology, particularly in transportation, enabled and even required bridges to be built larger, taller and, naturally, longer.

Even though cars in modern cities sometimes outnumber people (at least it often seems that way, especially when you're stuck in traffic), pedestrian bridges have not lost their relevance. Building a bridge requires accurate calculations that take into account every factor that may, to varying degrees, affect its stability and integrity. Yet even people walking on a bridge can cause it to sway. Scientists from the University of Georgia (USA) analyzed the Millennium Bridge in London and found that its instability has nothing to do with the synchronization of pedestrians, as previously thought. What kind of synchronization are we talking about, and what are the actual causes of wobbly pedestrian bridges? We'll find the answers in the scientists' report. Let's go.

The basis of the study

The Millennium Bridge, which crosses the Thames, is one of the most famous pedestrian bridges in the world and a very popular London landmark. It opened, as the name suggests, in 2000, at the turn of the millennium. Its dimensions are not the most impressive: 4 meters wide and 370 meters long.

Millennium Bridge

The opening of the bridge was both a joyous and a sad day for its planners. Joyous because Queen Elizabeth II herself attended the ceremony, and because so many people wanted to walk the new bridge: 100,000 crossed it on the first day. But that popularity also revealed the bridge's defect: it swayed, earning it the nickname "wobbly bridge" from Londoners. Trying to understand the cause, engineers concluded that it lay in resonance. Attempts to limit the number of pedestrians on the bridge at any one time only produced crowded queues, so it was decided to add dampers. That solved the problem, and the bridge reopened in 2002.

Nevertheless, scientists had questions about the cause of the swaying of the bridge and doubted that it was due to resonance.

To begin with, the topic of synchronization and resonance is worth touching on. Scientists explain that the synchronization of coupled, almost identical oscillators produces order in both natural and man-made complex systems. One of the best explanations of these phenomena is considered to be the Kuramoto model, and the instability of the Millennium Bridge on its opening day is often cited as an example of it.

Japanese physicist Yoshiki Kuramoto proposed a mathematical model capable of describing synchronization. Its essence is that each of the coupled oscillators has its own natural frequency ($\omega_i$) and is coupled equally to every other oscillator.
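In its standard form (with $K$ the coupling strength and $N$ the number of oscillators) the model reads:

```latex
\frac{d\theta_i}{dt} = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i),
\qquad i = 1, \dots, N
```

Each oscillator drifts at its own frequency $\omega_i$ while being pulled toward the phases of all the others; above a critical coupling strength, the population locks into synchrony.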

However, many scientists back in the 2000s questioned whether the Kuramoto model could fully explain the cause of the Millennium Bridge’s wobbling.

In the paper we are considering today, the scientists offer a different approach, whose main point is that any synchronization of pedestrians' footfalls is a consequence, not the cause, of the bridge's instability.

Interestingly enough, four days after the Millennium Bridge opened, Nobel laureate in physics Brian David Josephson said the following:

The Millennium Bridge problem has little to do with crowds walking in step: it has to do with what people do when trying to keep their balance if the surface they are walking on begins to move, and is analogous to what might happen if several people stand up in a small boat at the same time. In both cases, it is possible that the movements people make in trying to keep their balance will amplify any sway already present, so that the sway keeps worsening.

The gist of this statement is that, to maintain balance, each pedestrian must work to shed angular momentum in their frontal plane. In addition, there is evidence that the forces exerted to the left and to the right do not necessarily average out. Thus, transverse vibration energy is transferred from the pedestrian to the bridge: in effect, each pedestrian applies negative damping to the bridge.
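One simple way to formalize this picture (a sketch, not the authors' full model): if the bridge mode has structural damping $C$ and each of $N$ pedestrians contributes, on average, a negative damping $\sigma$, then the effective damping and the critical crowd size are

```latex
C_{\text{eff}} = C - N\sigma,
\qquad N_c = \frac{C}{\sigma}
```

Once the crowd exceeds $N_c$, the effective damping turns negative and the sway grows, with no footstep synchronization required to trigger it.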

The post Instability without synchronization appeared first on FMath 22.

]]>
Explanation of the Kalman filter in pictures https://www.fmath.info/best-budgets-for-business-events/ Wed, 09 Jun 2021 19:09:59 +0000 https://wordpress.iqonic.design/epy_wp/?p=2473 Surprisingly, not many software developers and scientists seem to know about it, which saddens me because it is a very generalized and powerful tool for combining information in the presence of uncertainty.

The post Explanation of the Kalman filter in pictures appeared first on FMath 22.

]]>

Surprisingly, few software developers and scientists seem to know about it, which saddens me, because it is a very general and powerful tool for combining information in the presence of uncertainty. Sometimes its ability to extract accurate information seems almost magical, and if you think I'm overselling it, take a look at this video, in which I show how a Kalman filter determines the orientation of a free-floating body by looking at its velocity vector. Amazing!

What is it?

The Kalman filter can be used in any domain where there is uncertain information about some dynamic system, and you can make an educated guess about what the system will do next. Even if chaotic reality intervenes and affects the clear motion we assume, the Kalman filter often does a pretty good job of predicting what’s actually going to happen. And it takes advantage of correlations between crazy phenomena that you might not even think of using!

Kalman filters are ideal for continuously changing systems. They don’t take up too much memory (because they don’t need to store history other than the previous state) and are very fast, making them well suited for real-time and embedded system tasks.

In most of the articles you'll find on Google, the math of a Kalman filter implementation looks pretty daunting. That's too bad, because the Kalman filter is actually simple and easy to understand if you look at it from the right angle. It therefore makes a great topic for an article, and I'll try to present it with clear, understandable images and colors. You don't need much background: just the basics of probability theory and matrices.

I will start with a vague example that can be solved with the Kalman filter, but if you want to go straight to pretty pictures and math, you can skip this section.

What can you do with a Kalman filter?

Let's look at an example: you have built a little robot that can wander through the forest, and to get around, the robot needs to know exactly where it is.

Our little robot.

Suppose our robot has a state $\vec{x_k}$, that is, just a position and a velocity vector.

Note that the state is just a list of numbers describing the configuration of our system; it can be anything. In our example it's position and velocity, but it could be the amount of liquid in a tank, the temperature of a car engine, the position of a user's finger on a touchpad, or any number of things you need to track.

Our robot also has a GPS sensor with about 10 meters of accuracy, which is good, but it needs to know its location more precisely than that. There are plenty of ravines and precipices in this forest, so if the robot is off by a few meters, it could fall off a cliff. GPS by itself is not enough.

We also know something about how the robot moves: it knows the commands sent to its wheel motors, and it knows that if it's heading in one direction with nothing in its way, it will very likely continue in that direction a moment later. But of course it doesn't know everything about its movement: it might be buffeted by the wind, its wheels might slip a little or bounce over bumps; so the number of wheel revolutions may not accurately reflect how far the robot has actually traveled, and the prediction won't be perfect.

The GPS sensor tells us something about the state, but only indirectly and with some uncertainty or inaccuracy. Our prediction tells us something about how the robot is moving, but also only indirectly and with its own uncertainty or inaccuracy.

But if we use all the information available to us, can we get a better answer than either approximation would give us in isolation? Of course the answer is yes, and that's what the Kalman filter is for.
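The heart of that answer can be sketched in one dimension: fusing two uncertain estimates (the motion prediction and the GPS reading) yields an estimate more certain than either one. The numbers below are made up for illustration:

```python
def fuse(mu1, var1, mu2, var2):
    # Product of two Gaussians: precision-weighted mean,
    # and a combined variance smaller than either input's.
    k = var1 / (var1 + var2)   # the Kalman gain, in one dimension
    return mu1 + k * (mu2 - mu1), (1 - k) * var1

prediction = (12.0, 4.0)   # predicted position: mean 12 m, variance 4 m^2
gps_fix = (10.0, 9.0)      # GPS reading: mean 10 m, variance 9 m^2 (~ +/-3 m)

mu, var = fuse(*prediction, *gps_fix)
print(mu, var)             # fused mean lies between 10 and 12; var < 4
```

The full Kalman filter is this same update generalized to vectors and matrices, alternated with a prediction step.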

The post Explanation of the Kalman filter in pictures appeared first on FMath 22.

]]>
A programmer of our time: whether he is a craftsman or a master https://www.fmath.info/a-programmer-of-our-time-whether-he-is-a-craftsman-or-a-master/ Fri, 05 Mar 2021 20:16:39 +0000 https://wordpress.iqonic.design/epy_wp/?p=1026 You work as a programmer and you write code almost every day. Tell me how often do you feel satisfaction from the work you do and pride in the results of your work?

The post A programmer of our time: whether he is a craftsman or a master appeared first on FMath 22.

]]>

You work as a programmer and write code almost every day. Tell me, how often do you feel satisfaction in the work you do and pride in its results? Have you ever produced working but poor-quality, "ugly" code just to meet a deadline? Are you motivated to write optimal code when you know it will become irrelevant and useless in a couple of months?

Let's try to work out how programming turned from a beautiful art, a creative pursuit, into a daily routine on a conveyor belt.

Sweatshop conveyor belt

If you work at a commercial company, you are probably familiar with the term “time to market” – the time interval between the emergence of a product idea or functionality and the release of the ready product on the market. Nowadays, everyone is trying to shorten this interval. In commercial development the acceleration of processes rules.

Releases follow one after another, all the participants of software development are always behind the schedule and work overtime. And all this is done with one purpose – to sell the ready product to the customer as soon as possible. There are no more frills in development and no more quality code – the conveyor must not stop.

The product ships on time, possibly even without critical errors: the goal is achieved. But everyone who took part in the development can see perfectly well what is going on behind the beautiful advertising facade. Technical debt keeps piling up in the system, the amount of hardcoding grows, and all the "temporary" solutions become permanent. Attempts to fix the situation usually go nowhere: "It's very good that you noticed this, but there's no time to fix it now; maybe we'll fix it in the future." Everything stays as it is, and on top of this shaky design the system continues to evolve and new functionality gets implemented.

Plastic World

On the other hand, who needs durability and quality now? Most of the code you write will live in the system for a few months at most and then be replaced or reworked. What's the point of writing it perfectly, then? Imagine plastering a room in a house every day, knowing full well that the house will be torn down tomorrow. Anyone would give up.

It seems that producing a quality, long-lasting product is simply unprofitable for many companies. If you make a smartphone comfortable and reliable, which consumer will want to buy a new one in a year? This is how the phenomenon called "planned obsolescence" came about. We all live in a plastic world of short-lived things.

Recall, for example, how Microsoft accidentally released the quite tolerable Windows XP. Users liked it so much that they were loath to upgrade to the next version. Then the story more or less repeated itself with Windows 7. But Microsoft made no more such "mistakes": upgrading to the next version of the system became voluntary in name and compulsory in practice.

Apparently, for these reasons, people nowadays think less and less about the beauty and optimality of the systems they release. The principle of "good enough to ship" prevails in commercial development.

The Art of Programming

So who is the modern developer: a craftsman with a passable knowledge of the tools, or a master who has reached the pinnacle of the art? This question has been around for years. It seems that in today's development world, creative masters are not wanted at all. I often encounter this attitude from management: "My boss considers programming a craft, and it is useless to argue with him. He thinks a programmer with a creative approach to the job is bad for business and should never be hired."

Many masters are even forced to disguise themselves as craftsmen in order to survive: otherwise they would not be able to meet the prohibitive deadlines.

Of course, no business needs only masters of the highest caliber. Everywhere there is work for the average worker who has completed a six-month programming course on one of the training sites. The problem is that this steadily reduces the quality of programs: installing a new version of some mobile app turns into a game of “guess what else they’ve managed to break.” Without a knowledgeable, educated engineer on the construction site, even the most diligent and disciplined workers will build a good, reliable house only by accident.

Donald Knuth called his book The Art of Computer Programming; its first volume was published back in 1968. Back then, programming was still an art. It is a pity that the profession is gradually becoming more pragmatic and down-to-earth. But at least in our own projects we can always engage in free creativity and not play by corporate rules.

The post A programmer of our time: whether he is a craftsman or a master appeared first on FMath 22.

Linear Algebra for Satellite Interferometry https://www.fmath.info/linear-algebra-for-satellite-interferometry/ Mon, 15 Feb 2021 11:17:34 +0000 https://www.fmath.info/?p=7887


As previously mentioned, there are different techniques for interferometric analysis, and the most automated methods are also the most complex, to the point that their use is avoided for lack of any opportunity to understand the calculations performed. The basic idea of satellite differential interferometry is quite obvious: the difference between two phase images is calculated and converted into displacement along the satellite’s line of sight using geometric constructions for a radar with a known wavelength. If it were not for the need to account for the Doppler effect, the extent of the imaged area (scene), and the terrain scanning method used, this task would be at the level of school arithmetic. But the subsequent processing of individual interferograms and of their series requires much more complex mathematics and calculations. That is the part of the work we are going to talk about.

Creating a system of equations

An unwrapped interferogram gives the displacement of all spatially coincident pixels over the time interval between the pair of images used to construct it. Of course, the displacement can be calculated only for those pixels that are present in both images of the pair. Moreover, the displacement is calculated with some error, which decreases as the interferogram’s coherence (a measure ranging from zero to one) increases. Even 100% coherence does not guarantee perfectly accurate displacement measurements: a typical problem is a change in the optical properties of the atmosphere between the two acquisitions, which changes the time it takes the radar beam to pass through the atmosphere; since these changes in atmospheric properties are unknown to us, the calculation converts them into a false surface displacement.

In general, to construct an SBAS (Small BAseline Subset) diagram, pairs of images are constrained by a perpendicular baseline (the perpendicular distance between the satellite positions at the two acquisition moments) and a temporal baseline (the time interval between acquisitions), so that the selected pairs yield potentially high-quality interferograms.
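These two constraints can be expressed in a few lines of code. The following Python sketch (not the author’s code; the function name and threshold values are illustrative assumptions) selects the index pairs that satisfy both limits:

```python
from itertools import combinations

def select_sbas_pairs(epochs, perp_baselines, max_perp=150.0, max_days=50):
    """Return index pairs whose temporal and perpendicular baselines
    stay under the given thresholds (thresholds are illustrative)."""
    pairs = []
    for i, j in combinations(range(len(epochs)), 2):
        dt = abs(epochs[j] - epochs[i])                  # temporal baseline, days
        db = abs(perp_baselines[j] - perp_baselines[i])  # perpendicular baseline, m
        if dt <= max_days and db <= max_perp:
            pairs.append((i, j))
    return pairs

# Toy example: acquisition days and perpendicular baselines in metres.
epochs = [0, 12, 24, 96]
bperp = [0.0, 40.0, -120.0, 30.0]
print(select_sbas_pairs(epochs, bperp))  # -> [(0, 1), (0, 2)]
```

Tightening either threshold reduces the number of interferograms but raises their expected coherence.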

Things are more interesting with the time constraint. Indeed, sometimes we can get reliable results (high coherence) even with an interval between images of about a year, but only for individual pixels. Thus, by increasing the permissible interval between images we significantly increase the number of obtained interferograms, even though the overwhelming majority of their pixels will be unsuitable for further processing due to loss of coherence. In return, we gain the opportunity to analyze atmospheric changes over a long time interval and to apply the corresponding corrections to all the calculated displacements. Since atmospheric changes are rather smooth relative to the spatial detail of the analysis (on a scale of the order of tens of kilometers), the corrections obtained from pixels that remain coherent for months are valid for a large territory around each such pixel. At the same time, to determine the surface displacements themselves it is usually sufficient to limit the time interval between images to about 50 days. In a notebook on GitHub, S1A_Stack_CPGF_T173_TODO.ipynb, I showed examples of calculating the errors caused by atmospheric effects and how simply excluding the affected images can greatly improve the results. Note that the calculations there are done on whole interferograms, so the notebook can also be run on Google Colab, whereas pixel-by-pixel calculations are too resource-intensive.

Now consider building a system of equations for all the obtained interferograms. Choosing the interval between two consecutive images (12 days) as the unit of time, for each pixel of the study area we can write a series of equations expressing the displacement over a series of time intervals, each with a given confidence (determined by coherence). The coherence thus determines the weighting factor of each equation in the system, and it applies to the equation as a whole. This becomes clearer if we consider the boundary cases: at zero confidence (zero weight) the equation is unsolvable and must be eliminated; at maximum confidence the equation should enter the solution with maximum weight. To solve the system by the least squares method (LSM), each equation must be scaled by the square root of its weight: the method minimizes the sum of squared residuals, so multiplying an equation by √w contributes a factor of w to the quadratic objective.
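The weighted solution for a single pixel can be sketched as follows (a minimal illustration, not the author’s code; the design matrix, displacements, and coherence values are toy assumptions). Each row of the design matrix marks the 12-day intervals an interferogram spans, and both sides of the system are scaled by the square root of the coherence-derived weight before ordinary least squares is applied:

```python
import numpy as np

# Toy example for one pixel: 3 epochs -> 2 unknown incremental displacements.
# Row A[i] has ones over the 12-day intervals interferogram i covers.
A = np.array([[1.0, 0.0],   # interferogram over interval 1
              [0.0, 1.0],   # interferogram over interval 2
              [1.0, 1.0]])  # interferogram spanning both intervals
d = np.array([2.0, 3.0, 5.2])    # unwrapped phase converted to displacement
coh = np.array([0.9, 0.8, 0.3])  # coherence of each interferogram

w = np.sqrt(coh)  # square roots of the weights normalize the LSM system
x, *_ = np.linalg.lstsq(A * w[:, None], d * w, rcond=None)
print(x)  # incremental displacements per 12-day interval
```

The low-coherence third interferogram still contributes, but its disagreement with the first two pulls the solution only weakly.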


The post Linear Algebra for Satellite Interferometry appeared first on FMath 22.

How to easily parse an algebraic expression https://www.fmath.info/how-to-easily-parse-an-algebraic-expression/ Mon, 04 Jan 2021 19:11:23 +0000 https://wordpress.iqonic.design/epy_wp/?p=2478


The purpose of this article is to show how to evaluate an algebraic expression given as a string, by converting it from infix to postfix form and then parsing the transformed string.

It is recommended that you read the following before reading it:

Prefix, infix, and postfix forms

The infix form is the most common because it is the easiest for humans to read. It is an expression in which the operators are placed between the operands, which is where the name of the form comes from.

The prefix form, on the other hand, is an expression in which the operators are in front of the operands.

Correspondingly, the postfix form is an expression in which the operators are after the operands.

To calculate an expression written in infix form, we need to analyze it beforehand, taking into account operator precedence and parentheses. Prefix and postfix forms require no such analysis, since the operators are written in the order they are evaluated and no parentheses are needed. For example, the infix expression (1 + 2) * 3 becomes * + 1 2 3 in prefix form and 1 2 + 3 * in postfix form.

Expressions written in the prefix or postfix form are also called parenthesis-free or Polish. They are called Polish after their inventor, the Polish mathematician Jan Łukasiewicz.

You can read more about the presented forms of recording algebraic expressions on Wikipedia.

Dijkstra’s algorithm

To convert to postfix form, we will use an improved version of the shunting-yard algorithm devised by Edsger Wybe Dijkstra.

The principle of Dijkstra’s algorithm:

  1. We go through the original string;
  2. When we find a number, we append it to the output string;
  3. When we find an operator, we first pop from the stack to the output string all operators whose priority is greater than or equal to that of the current operator (stopping at an opening bracket);
  4. then we push the current operator onto the stack;
  5. If we find an opening bracket, we push it onto the stack;
  6. When a closing bracket is found, we pop all operators down to the opening bracket from the stack to the output string, and delete the opening bracket from the stack;
  7. When the input string is exhausted, we pop all remaining operators from the stack to the output string.

Implementation of Dijkstra’s algorithm

We implement the Mather class, in which we define private fields: infixExpr for storing the infix expression, postfixExpr for the postfix expression, and operationPriority, where we define the list of all operators and their priorities.

In the operationPriority field, the opening bracket (‘(’) is assigned a priority only to avoid parsing errors later on, while the tilde (‘~’) represents a unary minus and is added to simplify further parsing.

Let’s add a private method GetStringNumber, designed for parsing integer values.

Next, create a ToPostfix method, which will convert the expression to reverse Polish (postfix) notation:
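The article’s original listing (apparently C#) is not reproduced in this text, so here is a minimal Python sketch of the conversion it describes; the snake_case names mirror the described members (operationPriority, GetStringNumber, ToPostfix) and are otherwise assumptions:

```python
# '(' gets the lowest priority only to avoid parsing errors; '~' is unary minus.
OPERATION_PRIORITY = {'(': 0, '+': 1, '-': 1, '*': 2, '/': 2, '~': 3}

def get_string_number(expr, pos):
    """Parse an integer starting at pos; return (number_text, next_pos)."""
    start = pos
    while pos < len(expr) and expr[pos].isdigit():
        pos += 1
    return expr[start:pos], pos

def to_postfix(infix):
    output, stack = [], []
    i = 0
    while i < len(infix):
        ch = infix[i]
        if ch.isdigit():
            num, i = get_string_number(infix, i)
            output.append(num)
            continue
        if ch == '(':
            stack.append(ch)
        elif ch == ')':
            while stack and stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # delete the opening bracket from the stack
        elif ch in OPERATION_PRIORITY:
            op = ch
            # '-' is unary if it starts the expression or follows '(' or an operator
            if op == '-' and (i == 0 or infix[i - 1] in '(+-*/'):
                op = '~'
            # pop operators of greater or equal priority, then push the current one
            while stack and OPERATION_PRIORITY[stack[-1]] >= OPERATION_PRIORITY[op]:
                output.append(stack.pop())
            stack.append(op)
        i += 1
    while stack:  # input exhausted: flush the remaining operators
        output.append(stack.pop())
    return ' '.join(output)

print(to_postfix('(1+2)*3'))  # -> 1 2 + 3 *
```

Note that operators of equal priority are popped before the current one is pushed, which gives the usual left-to-right evaluation for subtraction and division.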

Algorithm for calculating a postfix record

After getting the postfix record, we need to calculate its value. To do that, we use an algorithm very similar to the previous one, but it needs only one stack.

Let’s analyze how this algorithm works:

  1. We go through the postfix notation;
  2. When we find a number, we parse it and push it onto the stack;
  3. When we find a binary operator, we pop the last two values from the stack in reverse order (the second value popped is the left operand), apply the operator, and push the result back onto the stack;
  4. When we find a unary operator, in this case unary minus, we pop one value from the stack, subtract it from zero (since unary minus is a right-hand operator), and push the result back;
  5. After the algorithm has run, the single value left on the stack is the solution of the expression.

Implementation of the algorithm for computing the postfix record

Let’s create a private method Execute, which will perform the operations corresponding to the operator and return the result.

Now let’s implement the algorithm itself by creating a Calc method in which we define the following.
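As with the conversion step, the article’s own Execute and Calc listings are not shown here; a hedged Python sketch of what they describe might look like this:

```python
def execute(op, left, right):
    """Perform the operation corresponding to the operator and return the result."""
    if op == '+': return left + right
    if op == '-': return left - right
    if op == '*': return left * right
    if op == '/': return left / right
    raise ValueError(f'unknown operator: {op}')

def calc(postfix):
    stack = []
    for token in postfix.split():
        if token == '~':  # unary minus: subtract the value from zero
            stack.append(0 - stack.pop())
        elif token in '+-*/':
            right, left = stack.pop(), stack.pop()  # popped in reverse order
            stack.append(execute(token, left, right))
        else:
            stack.append(int(token))  # number: parse and push onto the stack
    return stack.pop()  # the last remaining value is the answer

print(calc('1 2 + 3 *'))  # -> 9
```

Feeding the output of the conversion step straight into this function evaluates the original infix expression.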

Testing algorithms

Let’s try running the expression 15/(7-(1+1))*3-(2+(1+1))*15/(7-(200+1))*3-(2+(1+1))*(15/(7-(1+1))*3-(2+(1+1)))+15/(7-(1+1))*3-(2+(1+1)) through the composed algorithm and see if it works correctly.

Although the algorithms implemented here work, they do not take into account spaces between characters or fractional values, do not check for division by zero, do not implement functions, and so on, so this code is provided only as an example.

The post How to easily parse an algebraic expression appeared first on FMath 22.
