Memory optimization with multidimensional arrays in Java

Yesterday I hit a memory limit while submitting a solution to an online judge, all because I declared a multidimensional array the wrong way. In C or C++ this would not be a problem, but because in Java everything except primitive types is an object, we have to count the memory allocated for multidimensional arrays differently.

For instance, in C++ the size of this array is 96 bytes (assuming a 4-byte int):
int arr[2][3][4];
2 * 3 * 4 = 24 elements, times 4 bytes per int, gives a total of 24 * 4 = 96.
If we declare the array as int arr[4][3][2], the result is the same.

Let’s compare these two declarations in Java:
int arr[][][] = new int[40000][200][2];
int arr[][][] = new int[2][200][40000];

From the data-storage viewpoint there is not much difference, yet the first one requires much more memory. Here is why:
1. An array in Java is an object itself. Every object (not the reference, but the object in the heap) carries a few additional bytes of header information. This header holds data the VM needs, for instance for the garbage collector. As far as I know, it generally takes 8 bytes on a 32-bit machine and 16 bytes on a 64-bit one. Besides that, an array object stores its length – another 4 bytes. Padding in memory may take a few more bytes. I won't attempt precise calculations; let's just assume that each array object (excluding its elements) takes X bytes.

2. int[a][b] – in Java this is 'a' arrays, each containing 'b' elements, i.e. we have a + 1 objects instead of just one.
int[a][b][k] – in the three-dimensional case it is a * b + a + 1 objects, and so on.
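
To make this concrete, here is a small sketch (the dimensions are made up) of what new int[a][b] allocates – it behaves as if we had built the rows one by one ourselves:

int a = 3, b = 4;
int[][] arr = new int[a][];      // 1 object: the outer array of row references
for (int i = 0; i < a; i++) {
    arr[i] = new int[b];         // 'a' more objects: one int[b] per row
}
// Total: a + 1 = 4 array objects – exactly what new int[a][b] creates in one go.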

Now let’s calculate and compare sizes of these arrays:
(40,000 * 200 + 40,000 + 1) * X + (40,000 * 200 * 2 * 4)
(2 * 200 + 2 + 1) * X + (40,000 * 200 * 2 * 4)

Clearly, the second part, which is the total size of the elements themselves based on the int type, is the same in both cases. But according to the first part, the first array creates 8,039,598 extra objects, so it takes considerably more memory.
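
One practical workaround (assuming we control how the data is indexed) is to put the small dimensions first and swap the order of the indices accordingly:

// Wasteful: 40,000 * 200 + 40,000 + 1 = 8,040,001 array objects
// int[][][] arr = new int[40000][200][2];

// Economical: 2 * 200 + 2 + 1 = 403 array objects, same number of int elements
int[][][] arr = new int[2][200][40000];

// Access with the indices reversed: what would have been arr[i][j][k] becomes arr[k][j][i]
int i = 12345, j = 50, k = 1;
arr[k][j][i] = 7;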

By the way, I could not verify whether this number is real with a profiler – do you have any idea how to check it?
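
One rough way to check (not a real profiler, just a before-and-after heap snapshot, so the number is only approximate) would be something like this:

Runtime rt = Runtime.getRuntime();
System.gc();
long before = rt.totalMemory() - rt.freeMemory();

int[][][] big = new int[40000][200][2];

long after = rt.totalMemory() - rt.freeMemory();
System.out.println(big.length);                          // keep the array reachable
System.out.println("approx. bytes used: " + (after - before));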

Dynamic Array

A dynamic array is a data structure of variable length. One can insert, retrieve or delete an element using random access – i.e. reaching any element takes a fixed amount of time, regardless of the overall size of the array.

You might come across this structure by different names – dynamic array, growable array, resizable array, dynamic table, mutable array, array list.

To guarantee random access, the array has to occupy a contiguous block of memory, i.e. its elements should be stored next to each other.

In case of a static array, this is not a problem, as we define the length in advance. But sometimes we don’t yet know the length. So, how much continuous memory should we allocate?

Clearly, there is no point in allocating a huge amount of memory just in case. We should not reserve a million bytes in advance just to store 20 elements.

This is where the dynamic array comes in. In many programming languages the problem is solved like this:
The dynamic array is backed by a static array, which starts out small. When this static array fills up and the user tries to add more elements, a larger static array is created behind the scenes, the existing elements are copied into it, and the old one is discarded. Consequently, inserting some elements into the dynamic array takes longer than inserting others.
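
As a minimal sketch (growth factor 2 assumed, no shrinking, no error handling), the mechanism looks roughly like this:

import java.util.Arrays;

// Minimal dynamic array of ints: a backing static array that doubles when full.
class IntDynamicArray {
    private int[] data = new int[4];   // small initial capacity
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            // the backing array is full: create a larger one and copy everything over
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value;          // the cheap case: just write into the next free slot
    }

    int get(int index) {
        return data[index];            // random access: constant time
    }

    int size() {
        return size;
    }
}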

With this solution in mind, we have to answer an important question: by what factor should we grow the static array's length?

Note that once again we are looking for a balance between performance and wasted memory. If we grow the array by only one element at a time, rewriting the whole array on every insertion will take too much time. But if we grow it by a large factor, we may end up with a large, mostly empty array.

The optimal factor is 2. This number may be tuned slightly depending on requirements.

You will find different variants of it across programming languages – e.g. vector in C++ typically grows by a factor of 2. Vector from the Java standard library also uses a factor of 2, although you can change it via a constructor argument. The backing static array of Java's ArrayList grows by a factor of 3/2, and that of HashMap by a factor of 2. In the C implementation of Python the number is a bit odd – approximately 9/8. Here is the source, and here is an explanation.

If the programmer knows the approximate size of the array in advance, they can configure the dynamic array accordingly. E.g. vector in C++ has a reserve function, which reserves memory of the given size.

The ArrayList and HashMap classes in Java have a constructor parameter initialCapacity. In the case of HashMap, every resize not only rewrites the backing static array behind the scenes, it also redistributes the entries into new buckets based on their hashes.
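
For example (the sizes here are made up), if we know in advance that roughly a million entries are coming:

import java.util.ArrayList;
import java.util.HashMap;

public class PreSized {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Pre-sized: ArrayList never has to grow and copy its backing array,
        // and HashMap resizes (and rehashes) far fewer times than with the defaults.
        ArrayList<Integer> list = new ArrayList<>(n);
        HashMap<Integer, Integer> map = new HashMap<>(n);

        for (int i = 0; i < n; i++) {
            list.add(i);
            map.put(i, i);
        }
        System.out.println(list.size() + " elements in each");
    }
}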

If performance is critical, this parameter can be used. I carried out several experiments and did see the difference; however, in ordinary tasks the difference is not noticeable. Even with a factor of 2 and a million elements, the backing array is rewritten only about 20 times (log2 of 1,000,000 is roughly 20).

In the beginning I mentioned that inserting an element can take anywhere from O(1) to O(n) time, where n is the total number of elements. Despite this, based on amortized analysis, the insertion time in a dynamic array is defined as O(1).

The idea of amortized analysis is to consider both the slow and the fast operations of an algorithm, since they may balance each other out. When estimating an algorithm's performance we generally reach for the worst-case scenario, but sometimes it is possible to account for how rarely the expensive operations happen.

Let's calculate the time needed to fill a dynamic array with n elements.
If we double the length of the array on each resize, we can estimate the number of element copies like this:

Let's start from the end. On the last resize all the elements have to be rewritten. Before that, only half of them; before that, a quarter, and so on.
n + n/2 + n/4 + n/8 + … = n (1 + 1/2 + 1/4 + 1/8 + …) = 2n

Now add the time for inserting each new element itself and we get 3n in total. Taking the average, one insertion costs 3n / n = 3 operations, i.e. O(1) time.
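
If you would rather see the numbers than trust the algebra, a small counter (doubling assumed, as above) does the trick:

public class AmortizedCount {
    public static void main(String[] args) {
        int n = 1_000_000;
        int capacity = 1;
        int size = 0;
        long copies = 0;

        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;        // resize: every existing element is copied once
                capacity *= 2;
            }
            size++;                    // the insertion itself
        }
        // prints: copies = 1048575  (2n = 2000000)
        System.out.println("copies = " + copies + "  (2n = " + (2L * n) + ")");
    }
}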

Case sensitivity in MySQL

Recently I wasted so much time on one problem that it stuck in my mind. Actually, the real fault was inaccurate logging, but that is not the main point right now. Nearly everything that we define in MySQL is case insensitive. Namely, column, function and procedure names, and aliases are not case-sensitive. Trigger names are an exception.

But as far as database and table names are concerned, their case sensitivity depends on the OS.

In MySQL a database corresponds to a folder, and a table to at least one file inside that database folder. This means that on Windows the case of these names does not matter, whereas on Unix platforms it does.

We could use the MySQL parameter lower_case_table_names, but it converts everything to lower case, and if we are migrating a database, we would have to convert the existing database and table names beforehand.

Trip to Google

Two weeks ago I found myself visiting Google 🙂 They have an office in Krakow, Poland. The five technical interviews took five hours and turned out to be quite interesting for me. It's exciting when your knowledge is put to the test. However, I have already received their decision, so I flew back and continued my work in Tbilisi 🙂

Even more exciting was spending a week with old friends and now I will not miss them for some time 😀

I'm not good at sharing experiences, so here are some photos…
