Course on free software server architecture - Second session - IPv6 - Encryption and digital signature - Security

Libresoft

The overall theme of the session held on the 1st of April 2011 was security.

IPv6

We started with a guest talk by Eva Castro from URJC University about IPv6. First there was an introduction to the IP system, and then we went directly into the reasons why IPv4 is becoming obsolete and why we need to implement IPv6 as soon as possible, since NAT doesn't solve many of its problems and is more of a patch. Diving into the technical part, we reviewed the changes in the datagram (simplified compared with IPv4 and thus faster to handle), the address format (128 bits are hard to represent in decimal notation, so they are written as groups of 4 hexadecimal digits with some abbreviations), the parts of an address (prefix, network, subnetwork and host), reserved addresses, and unicast, anycast and multicast addresses. This last part seems a bit complicated, but in general everything looks simpler than IPv4.
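
As a small illustration of that notation, here is a sketch using Python's standard ipaddress module; the address and the /64 prefix are made-up documentation values, not taken from the talk:

    # Minimal sketch of IPv6 notation with Python's standard ipaddress module.
    import ipaddress

    addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
    print(addr)              # compressed form: 2001:db8::1
    print(addr.exploded)     # full form with all eight groups of 4 hex digits
    print(addr.packed.hex()) # the raw 128 bits as 32 hexadecimal digits

    # A /64 network: the first 64 bits identify the (sub)network,
    # the remaining 64 bits identify the host interface.
    net = ipaddress.IPv6Network("2001:db8:1234:5678::/64")
    print(net.network_address, net.prefixlen)
    print(ipaddress.IPv6Address("2001:db8:1234:5678::1") in net)  # True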

More redesign came with the adoption of IPv6, for example ICMPv6, which takes over the roles of three IPv4-era protocols: ICMP, IGMP and ARP. One thing to highlight here is PMTU discovery (Path Maximum Transmission Unit), since IPv6 has eliminated in-network datagram fragmentation. If a packet reaches a link whose MTU is smaller than its size, the sending host is informed that it should resend it with the correct size, instead of routers fragmenting it and the destination reassembling it. This is why it is important for the origin of the packet to know the maximum datagram size that can be sent. Another advantage of ICMPv6 is that the Neighbor Discovery (ND) process no longer uses broadcast; it uses multicast, so only the interested nodes take part in the process.
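
To make the PMTU idea concrete, here is a purely illustrative Python simulation, with invented link MTUs, of how the sender's estimate converges to the smallest MTU along the path:

    # Purely illustrative simulation of Path MTU discovery (link MTUs invented).
    # In IPv6 routers never fragment: a router whose next link is too small
    # reports its MTU back (ICMPv6 "Packet Too Big") and the sender lowers
    # its estimate.  1280 bytes is the minimum MTU IPv6 requires of any link.
    def discover_path_mtu(link_mtus, initial=1500):
        estimate = initial
        while True:
            for mtu in link_mtus:
                if estimate > mtu:   # packet would not fit -> rejected
                    estimate = mtu   # adopt the reported, smaller MTU
                    break
            else:
                return estimate      # a packet of this size fits every link

    print(discover_path_mtu([1500, 1400, 1280]))  # -> 1280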

As possible approaches for deploying IPv6 in an environment that mainly uses IPv4, three alternatives were presented.

  • Encapsulation: IPv6 datagrams are carried inside IPv4 datagrams, a kind of tunneling technique. Traversing IPv4 networks this way adds overhead, but it ensures a complete IPv6 datagram reaches the destination.
  • Dual stack: a node can process both types of datagrams.
  • Translation: when an IPv4 node connects with an IPv6 node, the datagram is converted, with transformations such as embedding the 32 bits of an IPv4 address inside a 128-bit IPv6 address (see the sketch after this list).
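
As a small illustration of that embedding, Python's standard ipaddress module can show the well-known IPv4-mapped form ::ffff:a.b.c.d (the addresses below are documentation examples I picked, not from the talk):

    # Embedding a 32-bit IPv4 address inside a 128-bit IPv6 address,
    # using the well-known IPv4-mapped form ::ffff:a.b.c.d.
    import ipaddress

    v4 = ipaddress.IPv4Address("192.0.2.10")
    mapped = ipaddress.IPv6Address("::ffff:" + str(v4))
    print(mapped)              # e.g. ::ffff:c000:20a - last 32 bits are the IPv4 address
    print(mapped.ipv4_mapped)  # 192.0.2.10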

It is important to mention that not only the network needs to be adapted to support IPv6, but also the applications, and we have to consider how to handle legacy applications that only understand IPv4. We will probably end up adapting the code to support various modes: IPv4 only, IPv6 only and mixed IPv4/IPv6.
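
A minimal sketch of what such adapted application code can look like, assuming Python's standard socket API: getaddrinfo() hides the address family, so the same client works in IPv4-only, IPv6-only or mixed environments (host and port are placeholders):

    # Protocol-agnostic client: getaddrinfo() returns candidate addresses of
    # both families and we simply try them in order.
    import socket

    def connect(host, port):
        last_error = None
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                s = socket.socket(family, socktype, proto)
                s.connect(addr)
                return s               # first address that works wins
            except OSError as e:
                last_error = e
        raise last_error or OSError("no addresses returned")

    # connect("www.example.org", 80)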

It was a brilliant talk that helped me understand the concepts around this subject far better than the regular sessions given at the university.

Slides: IPv6

Public Key Cryptography

The following part of the session was given by Israel Herraiz and covered public key (or asymmetric) cryptography, in which we don't need a shared secret between two parties in order to communicate securely. It has two main uses: sending messages to a recipient while being sure that only that recipient can read them, and signing pieces of information so that we can tell who signed them and whether they have been tampered with.
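
GPG takes care of all of this for us, but as a rough sketch of those two uses, here is what they look like with the third-party Python cryptography package (not something shown in the session; a recent version of the library is assumed):

    # Rough sketch of the two uses of asymmetric cryptography
    # with the third-party "cryptography" package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Use 1: confidentiality -- anyone can encrypt with the public key,
    # only the private key holder can read the result.
    ciphertext = public_key.encrypt(b"for your eyes only", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"for your eyes only"

    # Use 2: authenticity -- only the private key can sign; verification
    # with the public key fails if the data was tampered with.
    signature = private_key.sign(b"I wrote this", pss, hashes.SHA256())
    public_key.verify(signature, b"I wrote this", pss, hashes.SHA256())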

We went over the need to send data so that it can only be read at the destination. The main examples were PGP (proprietary software) and the free and widely used GnuPG (GPG). We also looked at some of the mathematical principles underneath public key cryptography, which rest on the computational difficulty of factoring numbers into their prime components.
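
To see where factoring comes in, here is a toy example with ridiculously small primes (real keys use numbers hundreds of digits long); whoever can factor n back into p and q can reconstruct the private exponent:

    # Toy RSA with tiny numbers, only to show why factoring matters.
    p, q = 61, 53
    n = p * q                  # 3233, published as part of the public key
    phi = (p - 1) * (q - 1)    # 3120, only computable if you know p and q
    e = 17                     # public exponent
    d = pow(e, -1, phi)        # 2753, private exponent: e*d = 1 (mod phi)

    message = 65
    ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key d
    assert recovered == message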

There is also a federated network of keyservers, so the system works seamlessly and we can send messages to anyone who has published a public key. As the name implies, the public key is information that should be shared for the system to work.

Very important concepts here are the chain of trust and certification, since we can be sure the communication is secret, but not about the identity of the person on the other side. After all, a key is just a file. That is why methods for identity checking exist, like key signing parties: the process consists of signing with our own keys the public keys of people we trust. Why do we talk about a chain of trust? Because the trust propagates: if I trust Bob and Bob trusts Alice, I should also trust Alice even when I have never interacted with her. I like to say that the friends of my friends are my friends too.
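
Here is a toy sketch of that propagation, treating signatures as a graph and following "friend of a friend" reachability; real GPG trust computation is more nuanced, with trust levels and path limits:

    # Toy model of trust propagation through key signatures (made-up data).
    from collections import deque

    signatures = {          # who has signed whose key
        "me":    ["bob"],
        "bob":   ["alice"],
        "alice": ["carol"],
        "carol": [],
    }

    def trusted_from(start, signatures):
        trusted, queue = {start}, deque([start])
        while queue:
            for signed in signatures.get(queue.popleft(), []):
                if signed not in trusted:
                    trusted.add(signed)
                    queue.append(signed)
        return trusted

    print(trusted_from("me", signatures))  # bob, alice and carol are all reachable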

As an exercise we generated a key and uploaded it to a keyserver. Here is my key.

Before this I had never used GPG actively, but if you like graphical interfaces I recommend the Seahorse application from GNOME to perform the common operations.

This part ran a bit long, eating into the time for the next one, but it left us with the feeling that this is a broad subject that could be discussed at length.

Slides: Cryptography.

Security

Pedro Coca was in charge of this introduction to security in a server environment.

We swiftly skimmed through basic security concepts and considered whether having the source code publicly available makes software safer. Many papers and the ubiquity of free software solutions around the world seem to indicate that it is indeed more secure. There are many companies dedicated to scanning source code to detect possible bad practices from a security point of view; one example is Coverity.

Then we covered concepts like vulnerability, threat, risk, exposure and safeguards, which are needed to perform security management. In a company, each of these points deserves attention, by performing analysis and making the people in our organization aware of the security problems.

We need to notice that there is a balance between the cost of safeguarding an asset against a threat and the impact of losing that asset. So, on many occasions, what we are really doing is playing the odds and accepting a percentage of risk, since we can never be 100% safe.
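
A common quantitative way to frame that gamble, not necessarily the one used in the talk, is the annualized loss expectancy; all the figures in this sketch are invented:

    # ALE = single loss expectancy * expected occurrences per year.
    # A safeguard is worth paying for if it cuts the ALE by more than it costs.
    asset_value           = 20_000.0  # e.g. a server plus the data on it
    exposure_factor       = 0.5       # fraction of the asset lost per incident
    occurrences_per_year  = 0.3       # how often we expect the threat to hit

    single_loss_expectancy = asset_value * exposure_factor
    ale_without = single_loss_expectancy * occurrences_per_year   # 3000 / year

    safeguard_cost        = 1_000.0   # yearly cost of the countermeasure
    occurrences_mitigated = 0.05      # residual frequency with the safeguard
    ale_with = single_loss_expectancy * occurrences_mitigated     # 500 / year

    print("worth it" if ale_without - ale_with > safeguard_cost else "accept the risk")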

There is also the topic of access control and the main models that exist. The most common ones are centered on the individual we need to identify in order to grant access, and they are based on:

  • Something you know: a PIN, a password, ...
  • Something you have: a key, a card, ...
  • Something you are: biometric data.

A system is considered strong if it combines at least two of these techniques at the same time.

We took some time to think about passwords and how vulnerable a weak password can be, or a system that stores them poorly (the typical post-it on the desk). As an exercise, he proposed running John the Ripper against the users stored on a test system. On Unix these password hashes are stored in the /etc/shadow file.
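
To see what such a cracker is actually working against, here is a Unix-only sketch using Python's crypt module (deprecated in recent Python versions): /etc/shadow stores salted hashes, and checking a password means re-hashing the candidate with the same salt:

    # /etc/shadow does not store passwords, only salted hashes.
    import crypt
    import hmac

    # Simulate the stored hash field of one shadow entry (made-up password).
    stored_hash = crypt.crypt("correct horse", crypt.mksalt(crypt.METHOD_SHA512))

    def check(candidate, stored):
        # Passing the stored hash as the "salt" reuses its salt prefix.
        return hmac.compare_digest(crypt.crypt(candidate, stored), stored)

    print(check("letmein", stored_hash))        # False
    print(check("correct horse", stored_hash))  # True

    # A brute-force tool simply runs this check over huge wordlists,
    # which is why weak or reused passwords fall so quickly.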

It was a very good overview of all the risks that need to be assessed, not only regarding software but also the physical risks a machine might face and, by extension, the data stored in it, which is usually more valuable than the hardware. Security should not be something to consider only at the end of a project: if we follow good practices from the start we will avoid future headaches.

Slides: Security