
Android RNE App


Spanish Radio and Television (RTVE) is a government-funded entity that has been broadcasting several TV and radio channels since 1956.

Living abroad is tough, but the radio programs from RTVE are a great companion for those moments of missing home. I used the official app from the Google Play Store, but unfortunately, the quality of the app is nowhere close to that of the content, so I decided to build my own.

https://play.google.com/store/apps/details?id=pandatech.rneunofficial

My app is not based on a web view like the official one, but on native Android elements, which create a better user experience and are more visually appealing. It also plays audio as the official Android documentation prescribes, i.e. requesting audio focus before playing, and pausing when something like an alarm or a phone call comes through.

I am planning to add more features in the next few days, ideally the podcasts, and not just the live audio. Finally, and probably best of all: no ads!

Docker Image – Production Django on Alpine


Django works great when you are running locally, but the moment you decide to move to production, the headaches begin…

For the past few months I have been deploying everything with Docker; the result is much cleaner and easier to maintain, so when moving my Django app to production, I chose Docker for it too. As expected, someone had already taken the time to build a Docker image that, using Ubuntu as its base image, will run your Django site with uWSGI & Nginx, a marriage made in heaven.

My problem with that image is that it uses Ubuntu. Don’t get me wrong, I love Ubuntu for something like my laptop, but for a Django app in an EC2 instance, it just feels too heavy. So I created an equivalent Dockerfile, this time with the much lighter Alpine.

The Dockerfile is available on GitHub with instructions on how to deploy your app.
Enjoy!

Redshift UDF Phone Number to Country


Redshift’s UDFs (User Defined Functions) let you execute, with some limitations, certain Python libraries and custom code.

In my case, I wanted to extract the country code from a phone number in E.164 format. UDFs are a perfect fit for this: an implementation in pure SQL would almost certainly require creating custom views and hacking your way around, while in Python we can use the library phone-iso3166:

>>> from phone_iso3166.country import *
>>> phone_country('+1 202-111-1111')
'US'
>>> phone_country('+34645678901')
'ES'

To upload a library to Redshift, we first need to check that it follows this structure:

directory/
    __init__.py
    extra_files.py
    subdirectory/
        __init__.py
        other_files.py

In our case, phone-iso3166 already follows that structure, so we just need to extract the tarball and zip the package directory (naming the archive to match the S3 key used later):

tar -xvzf phone-iso3166.tar.gz
zip -r phone_iso3166.zip phone-iso3166
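If you prefer to do the packaging step from Python, shutil can build the same archive. The sketch below works in a temporary directory with an illustrative package layout; the real phone-iso3166 paths will differ:

```python
import os
import shutil
import tempfile

# Demonstration in a temp dir: lay out a package the way Redshift expects
# (a directory with __init__.py, optionally with subdirectories), then zip it.
work = tempfile.mkdtemp()
pkg = os.path.join(work, 'phone_iso3166')
os.makedirs(os.path.join(pkg, 'subdirectory'))
for path in ('__init__.py', os.path.join('subdirectory', '__init__.py')):
    open(os.path.join(pkg, path), 'w').close()

# base_dir keeps the 'phone_iso3166/' prefix inside the archive, which is
# what Redshift needs to import the package by name.
archive = shutil.make_archive(os.path.join(work, 'phone_iso3166'), 'zip',
                              root_dir=work, base_dir='phone_iso3166')
```

The resulting `phone_iso3166.zip` is what gets uploaded to S3 in the next step.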

Next, we need to upload the zipped library to S3. I did this part manually, into a bucket named s3://redshift/custom-udfs/

Now, connect to Redshift and issue:

CREATE LIBRARY phone_iso3166 LANGUAGE plpythonu 
FROM 's3://redshift/custom-udfs/phone_iso3166.zip' 
CREDENTIALS 'aws_access_key_id=<your-aws-key>;aws_secret_access_key=<your-aws-pass>' 
region as '<your-region>';

Last step:

CREATE OR REPLACE FUNCTION udf_phone_country (phone_number VARCHAR(64)) RETURNS VARCHAR(64) IMMUTABLE as $$ 
from phone_iso3166.country import phone_country, InvalidPhone
try:
    return phone_country(phone_number)
except InvalidPhone:
    return None
$$ LANGUAGE plpythonu;

You should be all set to use this in your queries:

SELECT 
analytics.udf_phone_country('+14151111975'),
analytics.udf_phone_country('+7652112311'),
analytics.udf_phone_country('+34626472918');

That returns:

"US","RU","ES"
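To sanity-check the UDF’s try/except-to-NULL behavior without a round-trip to the cluster, here is a toy stand-in for phone_country with a hard-coded prefix map; the real library resolves far more prefixes, this is illustration only:

```python
# Toy stand-in for phone_iso3166's phone_country: maps a few E.164
# dialing prefixes to ISO 3166 codes. Purely illustrative.
PREFIXES = {'1': 'US', '7': 'RU', '34': 'ES', '44': 'GB'}

def phone_country(number):
    digits = number.lstrip('+').replace(' ', '').replace('-', '')
    # Try the longest prefixes first (real dialing codes are 1-3 digits).
    for length in (3, 2, 1):
        code = PREFIXES.get(digits[:length])
        if code:
            return code
    raise ValueError('unknown prefix: %s' % number)

def udf_phone_country(phone_number):
    # Mirrors the UDF body: swallow lookup errors and return None (NULL).
    try:
        return phone_country(phone_number)
    except ValueError:
        return None
```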

 

DD-WRT Remote SSH Access behind VPN


SSH access doesn’t work when the OpenVPN client is enabled on DD-WRT.
Packets do arrive at the router if you try to SSH against the WAN IP; however, because all OUTPUT traffic is diverted through the VPN (interface tun0), SSH won’t succeed.

What’s missing is an OUTPUT rule in iptables to route traffic on port 22 through the vlan2 interface (the interface connected directly to the internet).

First, create table 202, routing via the gateway IP on the interface vlan2:

$ ip route add default via $(nvram get wan_gateway) dev vlan2 table 202

Then apply a rule sending packets marked with 22 to table 202.

$ ip rule add fwmark 22 table 202

Finally, mark with 22 every outgoing packet with source port 22 whose destination is not a machine on the local network.

$ iptables -t mangle -I OUTPUT -p tcp --sport 22 -d ! 192.168.1.0/24 -j MARK --set-mark 22

Note that the last command skips packets destined for the local network, in my case 192.168.1.0/24; the reason being that when SSHing from a local host, the packets should be routed through br0 and not vlan2.

First issue these commands on your router’s command line to make sure they work for you; if they somehow break your routing, a restart will clear them. Once you have verified they work, you can add them to your router’s firewall script. Note also that some DD-WRT versions won’t apply the iptables rules until all services are restarted.

 

Note that my config’s IP and port are different because I am not using the default values.

For reference, this is how my Firewall section script looks like:

# Create a rule to skip the VPN
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter  
iptables -t mangle -F PREROUTING  
ip route add default table 200 via $(nvram get wan_gateway)  
ip rule add fwmark 1 table 200  
ip route flush cache

## SSH to device (port 12601)
# First the port forwarding part
iptables -t nat -I PREROUTING -p tcp --dport 12601 -j DNAT --to 192.168.1.132:12601
iptables -I FORWARD -i vlan2 -d 192.168.1.132 -p tcp --dport 12601 -j ACCEPT
# Now mark packets from the RPI with source port 12601 with tag 1. The rule above will direct packets marked with 1 through the WAN gateway
iptables -t mangle -I PREROUTING -i br0 -p tcp -s 192.168.1.132 --sport 12601 -j MARK --set-mark 1

 

 

Redshift Force Drop User

Ever tried dropping a user in Redshift only to discover that user “user_1” cannot be dropped because the user has a privilege on some object?

That’s not of great help, Redshift. Luckily for us, there is a solution. Kamlesh Gallani posted a response about revoking the permissions for tables and schemas that a user might still be assigned, along with changing the ownership of any tables the user owns. After that, dropping the user is straightforward.

I created a Python snippet based on his response that might save you a few minutes. You will need the view v_get_obj_priv_by_user from amazon-utils; simply create the view on your Redshift cluster, then copy and paste this Python script into your favorite editor (e.g. Sublime Text), fill in the connection information and the list of users to drop, and execute it.

import psycopg2
 
# Connect to Redshift
conn = psycopg2.connect(host="", dbname="", user="", password="", port="5439")
conn.autocommit = True
cursor = conn.cursor()
 
# List of users to drop
users = ['user_to_drop_1', 'user_to_drop_2']
 
# New owner, used when changing ownership
new_owner = 'new_user'
 
# Templates to change ownership, revoke permissions, and drop users
change_ownership = "select 'alter table '||schemaname||'.'||tablename||' owner to %s;' from pg_tables where tableowner like '%s'"
revoke_schema_permissions = "select distinct 'revoke all on schema '||schemaname||' from %s;' from admin.v_get_obj_priv_by_user where usename like '%s'"
revoke_table_permissions = "select distinct 'revoke all on all tables in schema '||schemaname||' from %s;' from admin.v_get_obj_priv_by_user where usename like '%s'"
drop_user = "drop user %s;"
 
for user in users:
    # Change ownership
    cursor.execute(change_ownership % (new_owner, user))
    for r in cursor.fetchall():
        print("Executing: %s" % r[0])
        cursor.execute(r[0])
    # Revoke schema permissions
    cursor.execute(revoke_schema_permissions % (user, user))
    for r in cursor.fetchall():
        print("Executing: %s" % r[0])
        cursor.execute(r[0])
    # Revoke table permissions
    cursor.execute(revoke_table_permissions % (user, user))
    for r in cursor.fetchall():
        print("Executing: %s" % r[0])
        cursor.execute(r[0])
    # Drop user
    cursor.execute(drop_user % (user))

Miele W 1065 – Wiring for 110 Volts


I got one of these Miele beauties yesterday. She is over 20 years old, and my guess is that she weighs around 250 pounds. Yes, it was tough carrying it downstairs from its previous owner’s place with just two guys.

When I brought her home, I saw that it required 220V, which is a problem where I live. BUT, if you are willing to sacrifice the second heater, you can change the wires to make it work with the regular 110V.

Miele W 1065 Back

First and foremost, unplug the washer, then unscrew the white plastic box to expose the connections box. Reading from right to left, you will see Ground, Neutral, Phase 2, and Phase 1. The washer will work with just Phase 2 connected to 110V, but as said above, the heater won’t work, so you are stuck with washing in cold water. Which, anyway, is the best way to preserve your clothes.

I bought a piece of cord and cut it to expose the wiring, then simply connected ground to ground, neutral to neutral, and Phase 2 to the remaining wire. I used a multimeter to make sure the wiring was correct.

Try this at your own risk. Mine is working like a champ.

WordPress HTTP Error when uploading files


Today I got this error when uploading a video to my WordPress site. I Googled around for a bit, but the proposed solutions weren’t working.

It turns out that, since I am running WordPress in Docker behind an Nginx proxy, the latter was complaining that the file was too large (413 Request Entity Too Large) when the file was uploaded using the browser uploader rather than the WordPress multi-file uploader.

WordPress Browser Uploader

The solution was easy:

$ vim nginx.conf

http {
...
# Added because uploading large files to WordPress throws an HTTP error.
client_max_body_size 64M;
...
}

In my case, since I want the setting to take effect for all the websites I am hosting with Nginx, I applied it under the http section.

Enjoy

SET Game Visual Solver


For those of you that have never played SET® before, SET® is a fast-paced game involving 81 unique cards with different combinations of four characteristics (color [red, green, or purple], shape [diamond, oval, or squiggle], shading [hollow, shaded, or filled], and number [1, 2, or 3]). The point of the game is to find as many 3-card “sets” as possible. If the three cards are either all the same or all different in each of the four characteristics, you know you’ve found a set. Twelve cards are presented on the table at a time, and are continually replenished when a set is taken out, until no cards remain in the deck. Sometimes, however, there are no sets in the twelve cards, in which case 3 more cards are dealt. But, how do you know if there really is no set? I aim to solve this problem using image processing software and eventually create a set recognition mobile application.

So, what determines a set? Let’s take the example below (although it is probably one of the most difficult sets in the game). Looking at the cards in terms of the four characteristics, we have the following: color – red, green, and purple (all different); shape – diamond, oval, and squiggle (all different); shading – filled, hollow, shaded (all different); number – 1, 2, 3 (all different). If the cards were exactly the same, but all red, this would also be considered a set, because, in terms of color, they are all the same.

 

Here’s the simplified breakdown of how I used an image processing algorithm to determine sets from ordinary JPEG images:

Step 1: Convert the image to black and white and invert the image to calculate the number of shapes per card

Step 2: Compare the pixel surface area of one of the symbols on each card to determine the shape

Step 3: Normalize the RGB color of the images with an equation to figure out the color

Step 4: Convert the images to gray scale and apply a border detection to determine the shading

Step 5: Run the simple algorithm that deduces the sets

Introduction

The Set deck contains 81 unique cards, unlike a normal deck, which contains only 52. Furthermore, there are more features to detect in Set (color, shape, amount, and shading) than in regular playing cards (number and suit). Although it is true that regular playing cards have color features (red and black), this information is redundant because the suit of a card reveals the color (i.e. hearts and diamonds are red; clubs and spades are black). For this reason, regular playing card recognition systems use a gray-scale image to simplify the process. This type of simplification, however, is not possible in Set, as the color information is required. The solution to the problem of identifying all four features in Set, including both color and shading, could be used to build a mobile application that could solve the game from a single image. Furthermore, the same principles used to recognize shading and color could be applied to other systems to ensure a more reliable analysis. In playing card games like blackjack, for example, in the case of overlapping cards (in which the suit of the only revealed corner of a card cannot be recognized with 100% accuracy), taking the color into account might enable suit identification by using the process of elimination (i.e. a large rounded edge can only be a heart or a spade, which is either red or black).

In this way, the identification of color and shading features could add robustness to both regular card game recognition systems as well as other non-card-related recognition systems (i.e. to identify items in a factory, suitcases in an airport, etc.).

MATERIALS & METHODS

Below, we will first briefly explain how the game Set is played. Then, we will mention our motivation for conducting this particular image processing study using Set cards and our specific research goals. Finally, we will detail how the algorithm identifies not only the card borders but also the specific features of the cards.

A. The Game

Set is a card game consisting of 81 cards, each of them having 4 features with three different possibilities. Each combination of features appears once and only once in the deck (3^4 = 81).

  • Color: Red, Green, or Purple
  • Amount: 1, 2, or 3
  • Shape: Rhombus, Squiggly, or Oval
  • Shading: Hollow, Striped, or Solid

Three cards form a Set if, for each feature:

  • They are all the same color, or they are three different colors
  • They all have the same amount, or they have three different amounts
  • They are all the same shape, or they are three different shapes
  • They all have the same shading, or they have three different shadings

When a player finds a Set, he/she collects the three cards that make up the Set, adding them to his/her pile of Sets.
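The definition above translates directly into code. A minimal Python sketch of the Set test (the card representation here is my own; the study’s implementation was in MatLab):

```python
from itertools import combinations

# A card is a tuple of four features, each taking one of three values,
# e.g. ('red', 'diamond', 'filled', 1).

def is_set(a, b, c):
    # For every feature, the three values must be all equal (1 distinct
    # value) or all different (3 distinct values); 2 means no Set.
    return all(len({x, y, z}) in (1, 3) for x, y, z in zip(a, b, c))

def find_sets(cards):
    # All 3-card combinations that satisfy the Set condition.
    return [trio for trio in combinations(cards, 3) if is_set(*trio)]
```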

In an attempt to use image recognition technology to analyze photos of Set cards, we simulated a game of Set, pausing to take pictures whenever cards were replaced. These pictures were then individually analyzed using Matlab’s image processing toolbox to identify the specific features of the cards. The goal of this study consisted in identifying any and all Sets in a given group of cards (in this case twelve), with the hope of eventually using the same technology to create a mobile “Set-recognizing” application.

The experimental set-up consisted of a cell-phone with a camera (an iPhone 4, which has a resolution of 5 Megapixels) situated 50 cm above the ground and completely horizontal thanks to a support system (Figure 2). The cards were placed on a paper sheet, on which 12 parcels of 8 cm x 5 cm had been previously drawn. Each card was placed into one parcel.

All pictures were taken in the proximity of a window with natural light, and to compensate for the fact that the light only came from one side, a lamp was placed and lit on the opposite side. For the same reason, all pictures were taken in a short period of time to ensure no changes in the natural lighting condition.

A total of 25 photographs were taken for analysis. The selection and removal process of cards for each photograph was similar to that of a real game. More specifically, to begin the simulation, we dealt out 12 random cards from the top of the shuffled deck. After taking a picture of this initial set-up, we manually found and removed one Set. These cards were replaced with three new ones from the deck, and another picture was taken. This process continued until no cards remained in the deck.

It is important to mention, however, that at one point no Sets could be found. In this instance, because we had limited the paper stencil to 12 parcels, we could not add three more cards as in a real game. Thus, we randomly selected three of the laid-out cards and replaced them with three new cards from the deck to continue the simulation.

Once all 25 photographs were taken, they were cropped to eliminate unnecessary components for the analysis (in this case, the background of the floor). Before analyzing the features of the cards, we created an algorithm to perform some basic transformations of the images to ensure that the patterns would be recognizable. These basic transformations include trimming the image, re-sizing it to a smaller size, and transforming it into a black and white picture, as color is not necessary for the identification of certain features.

The main purpose of this operation was to segment the original picture into 12 fragments, each containing only one card. Not only did this make feature classification easier, but it also allowed the algorithm to run in parallel: when more than one execution thread is available, cards can be processed individually in different threads, speeding up the process. Because the paper stencil maintained card location, the coordinates used to cut the image were randomly extracted from one of the 25 pictures and maintained throughout the rest of the image analysis. Obviously, this solution would not be possible with a mobile application, but a similar alignment could be maintained on a mobile implementation with grids projected on the screen during picture-taking to communicate necessary adjustments. In this manner, the space between cards, as well as the distance of the camera from the cards, would remain rather uniform. In case the picture was not taken horizontally, we could use the information about the phone’s angle (extracted from the accelerometer) to correct the perspective.

After performing these basic transformations, we decided to address the problem of identifying card features one at a time. More specifically, we adjusted the algorithm to detect the four different features of the cards in the following order: amount, shape, color, and shading

E. Detecting Amount

In order to detect the amount feature, cards were first converted into black and white images. The images were then inverted and all enclosed figures were filled with white. Finally, noise and artifacts such as shadows were removed, as can be seen in the bottom row of Figure 4. With this image, we applied the function regionprops from MatLab’s Image Toolbox, which returns a data struct containing statistical information about objects, to detect the number of shapes in each card. According to this function, the number of objects in each card is simply the length of the returned data struct, where an object is defined as a group of connected pixels distinct from the background.
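regionprops is MatLab-specific, but the object count it provides here is just the number of connected components in the binary image. A pure-Python sketch of that idea on a small binary grid (not the study’s code, just the underlying technique):

```python
from collections import deque

def count_objects(grid):
    """Count 4-connected regions of 1s in a binary grid, which is the
    role regionprops plays for the amount feature."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    objects = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                objects += 1
                # Flood-fill (BFS) to mark the whole component as seen.
                queue = deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
    return objects
```

On a real card image, the grid would be the inverted, denoised black and white image, and the count would be 1, 2, or 3.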

F. Detecting Shape

The same inverted black and white image that had been treated for noise to recognize objects for amount detection was also used for shape detection. In this case, however, instead of further adapting the images, we had to further analyze them. The key to detecting shape is the well-differentiated surface area between the three possible objects (diamond, oval, and squiggly). The surface area can be calculated by simply counting the non-background pixels for one of the objects in the image. The same function regionprops used previously for amount detection also provides information about the area, so we could directly use this information to classify the cards. We also tried using other features such as eccentricity and perimeter to classify shape, but they were not successful because at least two of the three possible figures had a similar magnitude in the selected feature space.

The parameter adjustments were experimented with using only one image for each shape until no errors were observed. Only then were they tested on and applied to the remaining images in the database.

G. Detecting Color

The most difficult feature to measure, due to the limited capabilities of the smart-phone camera, was the color. Different approaches were tested with varying results. First, we tried using the smart-phone’s flash to take the pictures, but this produced not only saturated images but also images with undesirable reflections. Then, we applied the CIELAB color space, which partially removes the impact that lighting has on pictures, approximating the human vision system. This approach offered acceptable results but mis-classified a number of the cards, so it was abandoned. Ultimately, we normalized the RGB color, which offered the best performance: each RGB component of a pixel is divided by the sum of the three components.

An untouched image and its respective image after the RGB normalization process can be seen in Figures 5 & 6, respectively. In the normalized images, threshold values were set for the RGB components, so that the features would be forced into one of the three possible groups (red, green, or purple).
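Normalized RGB (chromaticity) divides each channel by the channel sum, so a uniform change in illumination cancels out. A sketch, with a toy classifier whose thresholds are illustrative rather than the study’s tuned values:

```python
def normalize_rgb(r, g, b):
    """Normalized RGB: each channel divided by the channel sum.
    Uniform illumination scaling (k*R, k*G, k*B) leaves the result unchanged."""
    total = r + g + b
    if total == 0:
        return (0.0, 0.0, 0.0)
    return (r / total, g / total, b / total)

def classify_color(r, g, b):
    """Toy thresholding into the three Set colors; these cutoffs are
    hypothetical, not the values tuned in the study."""
    rn, gn, bn = normalize_rgb(r, g, b)
    if gn > rn and gn > bn:
        return 'green'
    if rn > 0.45 and rn > gn and rn > bn:
        return 'red'
    return 'purple'
```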

H. Detecting Shading

In order to detect the last feature, the filling of the card, we adapted gray-scale images of the cards. Similar to the color detection algorithm, a number of different approaches were attempted before finding the optimized solution, which is based on edge detection.

The difference between a solid, striped, and hollow figure is the number of borders, which can be detected by inverting the image. More specifically, a solid figure has only one border, a hollow one has two borders (one on the outside and another on the inside), and a striped one has many borders. In order to detect the edges of the figures, we used MatLab’s edge function with the roberts method and a threshold value of 0.05. Gray-scale images of three figures with distinct shading, as well as their respective inverted border-detection images after applying the function, can be seen in Figure 7.

Finding “Sets”

Once the four features of the cards are successfully detected, the implementation of the algorithm to detect any and all Sets is quite straightforward. The code, which is based on the definition of a Set, is quite simple. The algorithm runs through each image of the game individually (each with 12 different cards laid out), applying all of the previously mentioned image adaptations to detect the four features and define the cards based on them. Then, the algorithm detects all possible Sets, assigning one of eight symbols (+, -, *, S, ?, 0, X, #) to any three cards that form a Set. An example of a solved image (with all identified Sets marked with matching symbols at the top of the cards) can be seen in Figure 8.

Results

In this section the results of the system are analyzed from two points of view: 1. the number of correct classifications that the system achieved, and 2. the improvement in execution time when the algorithm was run in parallel.

A. Classification Accuracy

After processing all cards in the database (300) and detecting all Sets, we compared the results of our algorithm with those of an experienced human player. In this blind comparison, our algorithm did not make a single error in identifying Sets. We can therefore conclude that the detection algorithm was successful.

B. Parallel Computing Improvement

Using MatLab’s Parallel Computing toolbox (more specifically, the function parfor, which runs a for loop in the available threads), the algorithm performed approximately 3.5 times faster. We measured the classification time for five different images with the function cputime, which minimizes the effect of the operating system and other types of interference. The mean, maximum, and minimum time for these five classifications using both serial and parallel computation are displayed in Figure 9.

Discussion

In this work, a complete system able to detect features on cards, and effectively “play” a card game by finding Sets, was implemented. This shows the feasibility of developing mobile applications that could solve logical puzzles (cards, chess, etc.) relying on a limited-quality photo. Such systems would also be interesting for security and surveillance purposes. For example, casinos, which invest large amounts of money in detecting cheaters (players counting cards, etc.), could use a system such as the one presented here to reliably detect the cards on the table, as well as, with slight modifications to the code, the chips. The system could also incorporate an algorithm to automatically indicate when cheating patterns are detected (for example, increasing the bet in blackjack as the card count goes up) and set off an alarm.

The system reached the maximum performance for the available data set, suggesting that certain aspects could be pushed more towards the limits. The orientation of the cards seems an obvious option, but this modification is not likely to produce different results as long as the cards remain in their parcels. Another aspect is the lighting of the scene, which further studies should investigate.

Because of the poor representation quality of the smart-phone’s CCD, the least robust detectors in the system are probably the color and shading ones, which both rely on components of the RGB image. Little can be done to change this limitation, as mobile phones have already been built to rely on these RGB components. We came up with a number of potential solutions to neutralize this limitation and correct the colors in the smart-phone image. First, we tried to use the device’s flash. Unfortunately, upon implementing this solution, we realized that the cards reflect the light, making image recognition quite difficult on the parts of the image with glare. Moreover, thinking about a casino implementation, using a flash is unlikely to be acceptable, as constant flashing at the tables could bother/distract clients. The second solution was to use a different color space, such as Lab, whose a and b components change little in different lighting conditions. This option, however, did not work as well as expected. For this reason, we implemented our third solution: normalizing the colors (as explained in the color detection section above) and adjusting the parameters for our particular lighting condition, tweaking them until they allowed for the biggest possible margins without producing any errors.

Regarding the segmentation of the image, it could be argued that we only have the information for the segmentation when working offline and with pictures taken in the same conditions. However, it should be pointed out that the position of the cards is not always the same: the only condition is that only one card is placed in each of the boxes, and these boxes could be made as big as needed. In a smart-phone application, when it comes to taking the picture, a grid-like pattern could appear over the image to ensure that only one card fits into each of the boxes formed by the lines. Similarly, in a casino, the dealer could be trained to place the cards in the same place every time to respect the segmentation margins. These two solutions would have the same results as our paper stencil in terms of maintaining card separation and size for segmentation.

Conclusion

This work shows the feasibility of an inexpensive system based on a smart-phone camera to solve the Set card game. With simple image processing techniques we were able to achieve a classification accuracy of 100%, suggesting that the presented work could lead to a system to analyze different card games such as poker or blackjack.

For more details, check out the paper I wrote.

BlackJack Hitting vs. Standing chances


Who has never thought about beating the house in a casino? As the MIT students said in their documentary Breaking Vegas, it’s not all about the money, but about the feeling of beating a huge corporation at their own game.

It’s very easy to find the basic strategy for blackjack online, i.e., the correct decision when the only information you have about the game is your cards and the dealer’s face-up card. By “correct decision”, I mean the one that minimizes your losing chances. Even with this basic strategy the odds against the house are negative, but greatly reduced, to a mere -$3 per $100 played (approx.). Of course the Breaking Vegas students didn’t just play basic strategy; they also counted cards and played as a team so as not to draw any attention from the Big Brother.

For this post, I wrote a small Python notebook that computes the chances for the different combinations of cards under the two strategies in their most simplified form: standing and hitting.

The full notebook: https://github.com/Koff/blackjack/blob/master/blackjack_simulation.ipynb

And the results in two images:

 

On the Y-axis, the starting score of the player; on the X-axis, the dealer’s face-up card (11 being the Ace). The colors and values represent the chances of winning when following the strategy.
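The notebook has the full simulation; as a rough illustration, here is a minimal Monte Carlo sketch of the stand-vs-hit comparison. It assumes an infinite deck, a dealer who stands on all 17s, a single hit, and a push counted as half a win; these are my simplifications, not necessarily the notebook’s exact model:

```python
import random

# Infinite-deck approximation: 10, J, Q, K all count as 10; 11 is the Ace.
RANKS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]

def draw():
    return random.choice(RANKS)

def dealer_total(upcard):
    """Play out the dealer's hand from the face-up card (stand on 17+)."""
    total, aces = upcard, (1 if upcard == 11 else 0)
    while total < 17:
        card = draw()
        total += card
        aces += (card == 11)
        while total > 21 and aces:   # demote soft Aces from 11 to 1
            total -= 10
            aces -= 1
    return total

def win_prob(player, upcard, action, trials=20000):
    """Estimate the chance of winning with a hard player total,
    either standing or taking exactly one hit."""
    wins = 0.0
    for _ in range(trials):
        total = player
        if action == 'hit':
            card = draw()
            if card == 11 and total + 11 > 21:
                card = 1             # count a drawn Ace as 1 if 11 would bust
            total += card
        if total > 21:
            continue                 # player busts, dealer wins outright
        dealer = dealer_total(upcard)
        if dealer > 21 or total > dealer:
            wins += 1
        elif total == dealer:
            wins += 0.5              # count a push as half a win
    return wins / trials
```

For example, `win_prob(20, 6, 'stand')` versus `win_prob(20, 6, 'hit')` shows the enormous gap between standing and hitting on 20.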

Baldur’s Gate II Text Issue on Android


I recently purchased Baldur’s Gate II on the Google Play Store, but after just a few minutes of playing, I noticed how small the UI elements and texts were on my LG G3.

Furthermore, turning on the UI Scaling setting doesn’t change anything for screens above Full HD, which is the case for the LG G3.
I decided then to open the file Baldur.ini, located in:

/sdcard/Android/data/com.beamdog.baldursgateIIenhancededition/files

I opened it with Root Explorer (you don’t need a rooted device). Then go to lines 75 & 76 and change them from:

'Fonts', 'Zoom', '148',
'Fonts', 'Size', '3',

to:

'Fonts', 'Zoom', '248',
'Fonts', 'Size', '6',

If you would also like to raise the FPS upper limit, by default capped at 30 fps, go to line 70 and change it (note that this can drain your battery) from:

'Program Options', 'Maximum Frame Rate', '30',

to:

'Program Options', 'Maximum Frame Rate', '60',

My final Baldur.ini looks like this:

CREATE TABLE options (
section string,
name string,
value string
);
INSERT INTO options ROWS (
'Fonts', 'ko_KR', 'UNBOM',
'Fonts', 'zh_CN', 'SIMSUN',
'Fonts', 'ja_JP', 'MSGOTHIC',
'Fonts', 'ru_RU', 'PERMIAN',
'Fonts', 'uk_UA', 'PERMIAN',
'Graphics', 'version', 'OpenGL ES 3.0 V@84.0 AU@ (CL@) - build 207',
'Graphics', 'renderer', 'Adreno (TM) 330',
'Graphics', 'vendor', 'Qualcomm',
'MOVIES', 'LOGO', '1',
'Graphics', 'Scale UI', '1',
'Game Options', 'Footsteps', '1',
'Game Options', 'Memory Level', '1',
'Game Options', 'Mouse Scroll Speed', '36',
'Game Options', 'GUI Feedback Level', '5',
'Game Options', 'Locator Feedback Level', '3',
'Game Options', 'Bored Timeout', '3000',
'Game Options', 'Always Dither', '1',
'Game Options', 'Subtitles', '1',
'Game Options', 'Keyboard Scroll Speed', '36',
'Game Options', 'Command Sounds Frequency', '2',
'Game Options', 'Selection Sounds Frequency', '3',
'Game Options', 'Effect Text Level', '62',
'Game Options', 'Infravision', '0',
'Game Options', 'Weather', '1',
'Game Options', 'Tutorial State', '1',
'Game Options', 'Attack Sounds', '1',
'Game Options', 'Auto Pause State', '0',
'Game Options', 'Auto Pause Center', '1',
'Game Options', 'Difficulty Level', '3',
'Game Options', 'Suppress Extra Difficulty Damage', '0',
'Game Options', 'Quick Item Mapping', '1',
'Game Options', 'Environmental Audio', '1',
'Game Options', 'Heal Party on Rest', '1',
'Game Options', 'Terrain Hugging', '0',
'Game Options', 'HP Over Head', '0',
'Game Options', 'Critical Hit Screen Shake', '1',
'Game Options', 'Hotkeys On Tooltips', '1',
'Game Options', 'Area Effects Density', '100',
'Game Options', 'Duplicate Floating Text', '1',
'Game Options', 'Tiles Precache Percent', '100',
'Game Options', 'Color Circles', '1',
'Graphics', 'Zoom Lock', '0',
'Game Options', 'Over Confirm Everything', '0',
'Game Options', 'Show Learnable Spells', '1',
'Game Options', 'Render Actions', '2',
'Game Options', 'Confirm Dialog', '1',
'Multiplayer', 'Disable Banters', '1',
'Program Options', 'Disable Cosmetic Attacks', '0',
'Game Options', 'Render Travel Regions', '1',
'Game Options', 'Pausing Map', '0',
'Game Options', 'Extra Feedback', '0',
'Game Options', 'Filter Games', '1',
'Game Options', 'All Learn Spell Info', '0',
'Graphics', 'Hardware Mouse Cursor', '1',
'Game Options', 'Maximum HP', '1',
'Game Options', 'Show Character HP', '1',
'Game Options', 'Nightmare Mode', '0',
'Game Options', '3E Thief Sneak Attack', '0',
'Game Options', 'Cleric Ranger Spells', '1',
'Program Options', 'Font Name', '',
'Program Options', 'Double Byte Character Support', '0',
'Program Options', 'Drop Capitals', '1',
'Program Options', '3D Acceleration', '1',
'Program Options', 'Maximum Frame Rate', '60',
'Program Options', 'Path Search Nodes', '32000',
'Program Options', 'Tooltips', '2147483647',
'Program Options', 'Translucent Shadows', '1',
'Program Options', 'Sprite Mirror', '0',
'Fonts', 'Zoom', '248',
'Fonts', 'Size', '6',
'Program Options', 'Volume Movie', '90',
'Program Options', 'Volume Music', '40',
'Program Options', 'Volume Voices', '100',
'Program Options', 'Volume Ambients', '40',
'Program Options', 'Volume SFX', '80',
'MOVIES', 'INTRO15F', '1',
'MOVIES', 'INTRO', '1',
'Multiplayer', 'Last Protocol Used', '1',
'Game Options', 'Last Save SOA', '000000001-Quick-Save',
'MOVIES', 'BLACKPIT', '1',
'Game Options', 'Last Save TOB', '000000004-Quick-Save-TOB',
'MOVIES', 'REST', '1',
'Window', 'Maximized', '0',
'MOVIES', 'DEATHAND', '1',
'MOVIES', 'POCKETZZ', '1',
'MOVIES', 'SARADUSH', '1',
'MOVIES', 'RESTDUNG', '1'
);