Monday, 25 January 2016

Auto complete field with Rails and React.js

For the impatient, all the code is here.

Background: I have more than 2,500 customers in the app I'm developing, so I want ajax auto-complete functionality in the "customer name" field when creating a new appointment; this way the end user can find the customer easily. I wrote the following React component:

// file: app/assets/javascripts/components/appointment_form.jsx

var MyList = React.createClass({
  getInitialState: function() {
    return {
       childSelectValue: undefined,
       getOptions: [],
       url: '/appointments/get_data',  // add to your routes.rb 
       options: []
      }
    },
    changeHandler: function(e) {
        console.log('In changeHandler method');
        this.getData(e); // setState in the success callback will re-render
    },
    getData: function(e) {
      e.preventDefault();
      var ovalue = e.target.value; // text typed by the user so far
      console.log(ovalue);
      var link = {url: this.state.url, ovalue: ovalue};
      $.ajax({
        type: 'POST',
        data: link,
        url: this.state.url,
        headers: {'X-CSRF-Token': $('meta[name="csrf-token"]').attr('content')}, // Rails asks for this
        cache: false,
        dataType: 'json',
        success: function(data) {
          if (data != undefined) {
            console.log(">>>>>> get_data response >>>>>>> " + JSON.stringify(data));
            this.setOptions(data);
          }
        }.bind(this),
        error: function(xhr, status, err) {
          console.error(this.state.url, status, err.toString());
        }.bind(this)
      });
    },
    setOptions: function(data) {
      var tempo = [];
      for (var i = 0; i < data.length; i++) {
        var option = data[i];
        var tmp = <option key={option.value} value={option.name} />; // one datalist entry
        tempo.push(tmp); // add the option
      }
      console.log(">>>>>> options built >>>>>>> " + JSON.stringify(tempo));
      this.setState({options: tempo});
    },
    render: function() {
        return (
          <span>
            <input type="text" className="form-control" onChange={this.changeHandler} placeholder="Owner" list="slist" name="owner" />
            <datalist id="slist">{this.state.options}</datalist>
          </span>
        )
    }
});

Now include your MyList component inside the form tag:

// file:  app/assets/javascripts/components/appointment_form.jsx (continuation)

render: function() {
    return React.DOM.form({
      className: 'form-inline',
      onSubmit: this.handleSubmit
    },
    React.createElement(MyList), // the autocomplete field defined above
    React.DOM.button({
        type: 'submit',
        className: 'button-primary',
    }, 'Create appointment'));
  },
  

In Appointments controller:

  # POST /appointments/get_data
  def get_data
    owner = params[:ovalue]  # ovalue comes from React
    # use a bound parameter instead of string interpolation to avoid SQL injection
    results = User.where("lname ~ ? AND group_id = 2", owner).select(:id, :fname, :lname)
    logger.debug "### get_data in appointments #####################>>>> #{params.to_json}"
    users = results.map do |r|
      { value: r.id, name: "#{r.lname} #{r.fname}" }
    end
    render json: users  # send this back to React
  end
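The `/appointments/get_data` endpoint needs a route, as the comment in the component hints. A minimal sketch for `config/routes.rb` (the path and controller names follow the example above; adjust to your app):

```ruby
# config/routes.rb (sketch; matches the POST request made by the component)
Rails.application.routes.draw do
  resources :appointments
  post 'appointments/get_data', to: 'appointments#get_data'
end
```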

Results:

Thursday, 4 June 2015

Installing a Linux-only OS on UEFI hardware

So, this is the situation: twenty months ago I bought an HP Pavilion 23 All-in-One with Windows 8.1. I never use Windows; I develop Rails websites on Debian. Usually when I get a new computer from any brand (HP, Dell, Toshiba, Acer) I remove Windows and all that "quick restore" crap by re-partitioning the hard drive, and then I install Debian.

But this HP machine has this thing called UEFI. At the time I had no chance to read all the UEFI-related material, so I just shrank the Windows NTFS partition from 1.7 TB to 80 GB and installed Debian in the freed space.

After that I never booted into Windows, and all was well until two weeks ago, when I started to see weird hard-drive messages. Fortunately I had time to back up all my stuff, but four days ago I got the dreaded "no disk found": my HD had died for good.

So I went to downtown Mexico City and bought a new $90, 2 TB Hitachi hard disk.

After I came back home, I read for a couple of hours and learned some things:

1) UEFI is the new BIOS.
2) Instead of saying "enter the BIOS" (the good old blue-screen options) you now say "enter the UEFI setup".
3) Some UEFI systems have an option to "emulate" the old BIOS; this is known as "legacy mode" or "BIOS mode".
4) If you install Linux in "legacy mode" and then switch the option to "UEFI mode", you won't be able to boot the already installed Linux system.
5) The same happens the other way around: if you install Linux in "UEFI mode" and then switch the firmware to "legacy mode", you won't be able to boot into Linux.
6) There is NO reason to install Linux in "legacy mode"; Linux understands UEFI perfectly well.
7) There is something called "Secure Boot". It is part of the UEFI spec, but it is a separate concern from the UEFI/legacy boot-mode choice. As a Linux user, the simplest way to proceed is to disable the "Secure Boot" option in the UEFI menu.
8) UEFI is firmware and can be upgraded.
9) When a UEFI computer is turned on, the firmware looks for a boot loader in a special "UEFI partition" on the hard disk (the EFI System Partition, ESP) to load the operating system.
10) On Linux systems, the boot loader is normally installed by grub at /boot/efi/EFI/debian/grubx64.efi.
11) A "UEFI partition" is just a normal partition with a FAT32 format and the "boot" flag enabled.
12) On Linux, the UEFI partition must be mounted on the /boot/efi directory.

So if you are a Linux user and you want to install a Linux-only OS on UEFI hardware, you need to do this:

1) From http://www.rodsbooks.com/refind/getting.html, download the "flash drive image file" (an .img file) and write it to a USB drive with the dd command.
2) Using another USB drive, put the testing Debian net installer on it. As usual, use the dd command to write the ISO file to the USB drive, and boot from it.
3) In the installer, when you reach the partitioning screen, create the first partition as a 500 MB partition with the FAT32 option, the boot flag enabled and the label set to "UEFI". The UEFI partition must be /dev/sda1.
4) Create the other partitions as usual: /, swap, /home. Finish the installation.
5) Boot from the rodsbooks USB drive. You should be able to boot into the Debian system.

6) Install grub-efi:

apt-get install --reinstall grub-efi
grub-install /dev/sda
update-grub

7) HP and some other brands look for the boot loader in the "Microsoft" or "Boot" directories, so:

cd /boot/efi/EFI
mkdir Microsoft
mkdir Microsoft/Boot
cp debian/grubx64.efi Microsoft/Boot/bootmgfw.efi
mkdir Boot
cp debian/grubx64.efi Boot

Voila! Now you can reboot and enter your Debian system as usual. If you ever need to reinstall Debian or any other Linux flavor, just don't touch the UEFI partition and the installer will find the boot loader.

Monday, 7 April 2014

Connecting to Gigya REST API


require 'cgi'
require 'net/http'
require 'uri'
require 'open-uri'   # needed for open(request_url) below
require 'hmac-sha1'
require 'digest/sha1'
require 'base64'
  
#@ here are the parameters you need to supply from your Gigya site's settings page.
api_url = "http://socialize.gigya.com/socialize.getUserInfo"
api_key = "your_apiKey_50K9LE1sUO6mohgUE"
gigya_secret_key = "*********************************"
user_id          = "_guid_4UUBV567==" # an already registered user in your site
id               = "random" 
#@ decode secret key and prepare nonce.
gigya_secret = Base64.decode64(gigya_secret_key)
timestamp = Time.now.gmtime.to_i
nonce = "#{user_id}#{id}#{timestamp}"
http_method = "GET"  #@shmu: define your HTTP method

#@ parameters are ordered alphabetically, base string include HTTP method call and its parameters, 
# all separated with unescaped "&"
parameters = CGI.escape("UID=#{CGI.escape(user_id)}&apiKey=#{CGI.escape(api_key)}&nonce=#{CGI.escape(nonce)}&timestamp=#{timestamp}")
base_string = "#{http_method}&#{CGI.escape(api_url)}&#{parameters}"
puts "base_string:  #{base_string.inspect} \n\n"  

#@ hmac/sha1 encryption of the gigya secret and the base_string
hmacsha1 = HMAC::SHA1.digest(gigya_secret, base_string)
gigya_sign = Base64.encode64(hmacsha1).chomp.gsub(/\n/,'')
gigya_sign = CGI.escape(gigya_sign) #@shmu: we must escape the signature as well.
  
#@ finalized api request url with the signed signature
request_url = "#{api_url}?apiKey=#{api_key}&nonce=#{nonce}&timestamp=#{timestamp}&UID=#{user_id}&sig=#{gigya_sign}"

#puts request_url.inspect
puts "Request_url:  #{request_url.inspect} \n\n"
  
#@ read the response
response_text = open(request_url).read

#@ handle error messages from gigya XML output.
regexp = /<statusCode>(.*?)<\/statusCode>/
status_code = response_text[regexp, 1].to_i  # first capture group, or 0 if absent
if status_code == 200
  # status_message, user and logger come from the surrounding app code
  okmsg = "Gigya: Content Shared: #{status_message} [#{user.nick}]"
  logger.info okmsg
  return okmsg
else
  raise "GIGYA RESPONSE ERROR: #{response_text[/<errorMessage>(.*?)<\/errorMessage>/, 1]} \n\n
#{response_text.inspect} \n\n\n [id:#{id}, user:#{user}]\n\nStatusMessage: #{status_message}\n\n Basestring:
#{base_string}\n\n RequestURL: #{request_url}\n\n\n"
end
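The hmac-sha1 gem hasn't been maintained for years; the same digest can be computed with Ruby's bundled OpenSSL. A sketch, using a placeholder secret and base string (the real values are built as shown above):

```ruby
require 'openssl'
require 'base64'

# Stdlib equivalent of HMAC::SHA1.digest(gigya_secret, base_string).
gigya_secret = Base64.decode64("c2VjcmV0LWtleQ==")     # placeholder secret
base_string  = "GET&http%3A%2F%2Fexample%2Fapi&a%3D1"  # placeholder base string
digest = OpenSSL::HMAC.digest("SHA1", gigya_secret, base_string)
gigya_sign = Base64.strict_encode64(digest)  # 20-byte SHA1 digest -> 28 chars
```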

Tuesday, 11 February 2014

Mocking and stubbing

In Rails you have probably already dealt with the automatically generated RSpec files. Those files show you the basic Test-Driven Development features, but in the real world things can be more complicated. You have likely read about mocks and stubs as ways to write better unit tests, but it is not clear what they are or how to use them. First, I will try a definition:

A stub is a unit-testing technique that creates a minimal object: one with only the attributes and methods the class under test needs in order to return a specified result. A mock, on the other hand, is a kind of stub with assertions that one or several methods get called. In other words: stubs are "dead" simulations of objects that we use to check whether the method we coded returns the value we expect. Stubs are "blind" to the behaviour of our classes; they are just used to feed calls into the class we want to test. Mocks, by contrast, are stubs that also test behaviour: they have expectations and make assertions about the way the methods of our class are used, that is: when, how and how many times a method is called.

Stubbing

You have a class called MakeACake with three methods: MixIngredients(), PutInOven() and Cook(). The Cook() method sets the data inside the class, calls MixIngredients() first and then PutInOven(). If all goes well, Cook() returns the boolean true.

The MakeACake class needs several libraries to work properly: Milk, Eggs and Sugar. It needs to call the getQuantity() method on those classes.

You want to write a unit test for your MakeACake class to be sure it works the right way. So you would write something like this:

milk = stub("my_Milk") # create stub objects
sugar = stub("my_Sugar")
eggs = stub("my_Eggs")

milk.stubs(:getQuantity).returns("1 liter") # stub the getQuantity method
sugar.stubs(:getQuantity).returns("0.5 kg")
eggs.stubs(:getQuantity).returns(2)

mac = MakeACake.new() # create the object you want to test
result = mac.Cook(milk, sugar, eggs) # pass the stubbed objects
assert_equal(true, result) # check that all was OK

This test stubs some objects and their methods, then passes those objects to the MakeACake class, and finally the assert_equal() assertion checks that Cook() returned true. If so, congratulations!! Your class has passed its unit test.
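The idea can also be sketched in plain Ruby, without any mocking library. Everything here, including the MakeACake body, is hypothetical glue so the example runs end to end:

```ruby
require 'ostruct'

# Hand-rolled stubs: objects exposing only the one method MakeACake needs.
milk  = OpenStruct.new(getQuantity: "1 liter")
sugar = OpenStruct.new(getQuantity: "0.5 kg")
eggs  = OpenStruct.new(getQuantity: 2)

# A minimal MakeACake so the example is self-contained (hypothetical body).
class MakeACake
  def Cook(milk, sugar, eggs)
    # mixing and baking would go here; we only check the ingredients arrived
    [milk, sugar, eggs].none? { |i| i.getQuantity.nil? }
  end
end

result = MakeACake.new.Cook(milk, sugar, eggs)
# result => true
```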

But why would I want to stub things in the first place? You want to use stubs for some of these reasons:

  • Incomplete code. Suppose you are in Argentina and you want to start coding the MakeACake class, but the developer in Costa Rica won't finish the Eggs and Milk libraries until next week. You know from the UML diagrams that those libraries have a getQuantity() method, so you can stub those methods and start working on your class right away.
  • Independence. As the name "unit test" suggests, tests must be decoupled. In the example, stubbing the libraries (Milk, Eggs and Sugar) allows you to test MakeACake independently of the rest of the code.
  • Hard-to-replicate states. Some states are difficult to reproduce; for instance, a method like CheckNetworkFailure() can only be tested under a real failure. In these cases it is better to mock an object.
  • Speed. Test suites can take over 30 seconds: unacceptable. Stubs and mocks let you break the tests into smaller and faster pieces of code.

But note something: this test doesn't know anything, and doesn't test anything, about the behaviour of the MakeACake class. Stubs are just "dead" simulations of methods, there to supply the real-world data a class needs to work properly.

In some ways this is fine: Test-Driven Development is about testing results, the return values of methods, not about testing the code itself. We shouldn't care about how a method does things; we use TDD just to be sure it returns the correct value.

Mocking

But what about behaviour? Sometimes you want to be sure that some classes do things in a certain way, not just check the final result of a method. In some scenarios it is necessary to test a class considering its state and its call sequence.

If I want to mock the Egg class to be sure that the getQuantity() method is called at least once when MakeACake is used, I would need code like this:

egg = mock("my_egg")
egg.expects(:getQuantity).at_least_once.returns(2)
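What a mock adds on top of a stub, recording calls and asserting on them, can be sketched by hand too. EggMock is a hypothetical class reusing the getQuantity name from the example:

```ruby
# A hand-rolled mock: returns canned data AND records how many times
# getQuantity was called, so the test can assert on the interaction.
class EggMock
  attr_reader :calls

  def initialize(quantity)
    @quantity = quantity
    @calls = 0
  end

  def getQuantity
    @calls += 1   # record the interaction
    @quantity
  end
end

egg = EggMock.new(2)
egg.getQuantity  # the code under test would trigger this call
raise "expected at least one call" unless egg.calls >= 1
```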

Monday, 23 December 2013

What is OpenEHR anyway?

I am not an expert in OpenEHR, but I (and the Mexican government, by the way) am very interested in knowing the proper way to standardize, store and consult health information. It is almost unnecessary to justify this interest. On the one hand, software development is too expensive to update, and sometimes rebuild, entire health systems every eight years to meet new health requirements and standards. On the other hand, health information is too important to let it become obsolete. Most of the children born after 1995 will live past one hundred years of age, and we are living in an era where huge amounts of genetic information will be added to the Health Record of every citizen. We must find a way for those Health Records to remain available and ready to be updated for the next century. Piece of cake, right?

Unfortunately, right now the situation is a mess: terabytes of information are (and in many cases always will be) unreachable and useless, because software companies chose a development approach which is great for building an LMS, a video game or an insurance tool, but is a nightmare in the long run for handling complex and ever-changing data. Considering that the information contained in a Health Record can save the life of someone, somewhere, this is a pretty serious issue.

So, I have been reading the OpenEHR PDFs, and this is my summary so far:

Archetypes have definitions, ontologies and constraints. The process of creating a new archetype is called "modelling" and is done by medical experts, who in many cases know nothing about the software and computer world, and don't need to anyway. This is why OpenEHR is called a "two-level modelling" specification: the Medical Knowledge Domain remains independent from the Programmer Knowledge Domain. Archetypes are the "lego bricks" that allow us to build a full OpenEHR application and, happily, most of the archetypes we could need are already modelled and available in the CKM:

http://www.openehr.org/ckm/

On the other hand, Templates are a kind of archetype with "slots"; basic archetypes are adapted and embedded in these slots to build the data we need. For instance, an "Annual Medical Check Up" template could be composed of the "Name", "Age", "Sex", "Blood Pressure", "Is Smoker", "Heart Rate", "Menopause", etc. archetypes. Templates are expressed as screen forms showing the data points of their archetypes. When you insert the basic archetypes into the template's slots, you can remove the parts of the basic archetypes you don't need. So templates are closer to a real application.

Templates are "compiled" to generate the Operational Templates (OTs). The OT connects the medical world (i.e. the doctors who defined the archetypes and templates) with the programmers' world, and contains the information needed to build the models (the "M" in MVC) using Java, C#, Ruby, whatever. This transition from archetypes (the health experts' domain) to concrete classes (the software experts' domain) is made through the Reference Model and the Archetype Object Model.

Having the models derived from the OTs, the data can now be stored in MySQL, PostgreSQL, whateverSQL, like in any other MVC software. The Archetype Query Language allows extracting data from the system if you want to build a RESTful API to be consulted by external entities.

Thursday, 14 November 2013

Hashing passwords as Devise does

I am stuck in a stupid office waiting for some seal, so in order not to lose the whole day I figured I would write up a tip. Many of these tips are for my future self, because I hate it when I've spent two hours solving a problem and half a year later I have to start searching all over again; sometimes it's just a parameter in a command, sometimes a lib I need to compile Emacs. If you are a developer you know what I'm talking about ;-)

The problem: sometimes I just want to update a password manually from the PostgreSQL console, or to create the initial user defined in the seeds.rb file, and to do either I need to hash the password first. We do this from the rails console:

$ RAILS_ENV=development bundle exec rails c

And now the hash:

hashed_password = ::BCrypt::Password.create("s0m3HardAndNewP44ss")

Update the user model:

user = User.find(456)
user.update_attribute('encrypted_password', hashed_password)

Or you can do it in the old SQL way:

UPDATE users SET encrypted_password='$2a$10$4R.gf6j9AmV4GAgszYVLxeCa' WHERE id=456;

And that's all!!

Monday, 23 September 2013

Symfony 2.3 for the impatient

A company asked me to update their intranet, which dates from 2002. It is a mid-sized company without much support staff; putting in Ruby on Rails would raise their costs a lot, so PHP is a good option here. Since I already know CakePHP inside and out, this seems like a good opportunity to take a look at Symfony. And since I'm Mexican, brave and bold, I use Emacs and PostgreSQL to steer clear of Vim and MySQL. ;-)

Here is my summary:

1) Install composer:

$ curl -sS https://getcomposer.org/installer | php

2) Run:

$ php ./composer.phar create-project symfony/framework-standard-edition intranet 2.3.1

database_driver (pdo_mysql):pdo_pgsql
database_host (127.0.0.1):
database_port (null):5432
database_name (symfony):DBSYMFONY
database_user (root):postgres
database_password (null):*********
mailer_transport (smtp):
mailer_host (127.0.0.1):
mailer_user (null):
mailer_password (null):
locale (en):
secret (ThisTokenIsNotSoSecretChangeIt):

3) A quick check:

$ php app/check.php

4) In a new tmux console, run the built-in web server:

$ php app/console server:run

5) Browser:

http://localhost:8000/config.php

6) Create the DB:

$ php app/console doctrine:database:create

7) Bundles are the small "apps" a Symfony project is built from, much like Django apps. This modularization is a plus of this framework. Symfony uses PHP 5.4 namespaces, so bundle namespaces follow this convention: CompanyName/ProjectName/ModelName. So let's create our first bundle:

$ php app/console generate:bundle --namespace=GCP/Intranet/ImagesBundle --no-interaction --dir=./src --format=yml

WATCH OUT!!!: Symfony supports different configuration formats for bundles (php, yaml, xml, annotation) but it does not support mixing them: all bundles must use the same format.

8) Now we create a Doctrine entity. In Ruby on Rails it would be something like:

$ rails g scaffold Image file:string tags:string user_id:integer

In Symfony:

$ php app/console generate:doctrine:entity --entity=GCPIntranetImagesBundle:Image --format=yml --fields="file:string(100) tags:string(255) user_id:integer"

Doctrine creates PostgreSQL's "id serial" column by default, so there is no need to declare it.

9) Now we want the CRUD for this model:

$ php app/console generate:doctrine:crud --entity=GCPIntranetImagesBundle:Image --format=yml --with-write --no-interaction

10) Now we migrate the Doctrine entity to PostgreSQL:

$ php app/console doctrine:schema:update --force

I understand the people who like Symfony, and I suppose the more you use it, the more you grow to like it. But at the beginning, configuring so many files can get tedious, because for some reason the wildly successful "convention over configuration" principle is conspicuously absent in Symfony.