Monday, 20 August 2007

Summary of my GSoC project

The Google Summer of Code 2007 has finished. It's time to summarize the results.

Patches submitted to the ejabberd bug tracker


Protocols that I have (partially) read during my project

Collateral tasks

  • I reported to Peter Saint-Andre all the errors that I found in the XEPs.
  • During my implementation of XEP-0033 I wrote several blog posts proposing changes to this protocol. Peter Saint-Andre will use those texts to update the protocol.
  • Started to use Emacs to format Erlang code (erlang-mode) and to commit to the SVN repository (psvn).
  • Started to read Joe Armstrong's new book, Programming Erlang - Software for a Concurrent World, and applied the new knowledge while programming my GSoC tasks.
  • Started to read the GSoC gift book, Karl Fogel's Producing Open Source Software - How to Run a Successful Free Software Project, and applied the new knowledge in my ejabberd tasks.
  • Continued my involvement in ejabberd as usual, which includes being active in ejabberd's forum, chatroom, mailing list and the ejabberd-modules contribution SVN repository.
  • Two trips, one of them international :)

Sunday, 19 August 2007

Final GSoC project status

A week ago I posted my Almost final GSoC project status.

Since the previous status update, I have completed these tasks:

  • Implement or update as much of XEP133 Service Administration in ejabberd as possible.
  • Prepare and submit patches to ejabberd bug tracker.
The tasks that I haven't completed, and my plans to complete them, are, in no particular order:
  • Perform code profiling to find bottlenecks and deficiencies in mod_multicast, and improve the code. - I'll focus on this topic from now on.
  • Once I have made all possible optimizations: perform benchmarks to check mod_multicast's effect on CPU, RAM and traffic consumption.
  • Wait for the ejabberd code reviewers, in case I need to fix any problem in my code before the patches are applied to ejabberd trunk.
  • Discuss potential security and spam vulnerabilities (talk on the JDEV and JADMIN mailing lists).
  • Add XEP33 support to ejabberd's Pub/Sub and/or PEP service once their codebase is stable.
  • Wait for Peter Saint-Andre's questions regarding his XEP-0033 update.
  • September 7th: Upload final code to Google Summer of Code hosting.
The Google Summer of Code 2007 has finished, so those remaining tasks fall outside the scope of my GSoC project timeline. However, I consider them important for my own personal project timeline, so you can expect me to work on all of them at some point.

ejabberd gets XEP-0033: Extended Stanza Addressing

I consider my GSoC task of implementing XEP-0033: Extended Stanza Addressing in ejabberd finished.

The implementation is divided into several parts:

The largest part of the code is in the multicast service (mod_multicast).

Tomorrow is the GSoC pencils down deadline. This means that I will be evaluated only on the code that I have written up to today.

I expect all my code to eventually be included in ejabberd trunk. However, I'll propose that mod_multicast be disabled by default in the example configuration, at least in the first ejabberd release that includes the module.
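For illustration, this is roughly how the modules section of the example ejabberd.cfg might look with mod_multicast shipped but commented out (the surrounding module list is just a hypothetical excerpt):

{modules, [
  {mod_muc,    []},
  {mod_pubsub, []}
  %% {mod_multicast, []}   %% disabled by default; uncomment to enable XEP33 routing
]}.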


Benchmark

I ran some benchmarks using Jabsimul. The only performance indexes I could evaluate were the CPU percentage and the MB of RAM consumed by the ejabberd process. I created 900 accounts and populated each one with around 40 roster items of type 'both'. Then, using Jabsimul, each logged-in user changed its presence every few seconds.

With the patches applied and the multicast service enabled, small rosters and small chatrooms (fewer than 5 contacts or participants) show a small increase in CPU consumption. With medium-sized rosters (40 roster items), CPU consumption triples with respect to the stock ejabberd trunk version.

Obviously, I don't consider it acceptable for CPU consumption to triple just because all the packets use XEP33 with 40 destinations. The bottleneck is mod_multicast.

However, this result does not surprise me at all. During my GSoC coding I only cared about optimization in the patches that will be committed to ejabberd trunk: ejabberd_c2s, mod_muc_room and ejabberd_router_multicast. I didn't care about code optimizations in mod_multicast; functional correctness was far more important to me. Now that mod_multicast works correctly, I can concentrate on improving it without breaking its correctness.

This planning allowed me to do everything I had planned for my GSoC project and finish the summer with correct, working code. During the last week of August I plan to profile, reorganize and improve mod_multicast to reduce its computational cost as much as possible.
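For reference, this is a minimal sketch of the kind of profiling session I have in mind, using OTP's standard fprof tool from the Erlang shell of a running node; the registered process name is only an assumption for illustration:

%% Trace the (hypothetically registered) multicast process for a while,
%% then build and analyse the profile.
Pid = whereis(ejabberd_mod_multicast),   %% assumed registered name
fprof:trace([start, {procs, [Pid]}]),
timer:sleep(10000),                      %% let some multicast traffic flow
fprof:trace(stop),
fprof:profile(),                         %% reads the default fprof.trace file
fprof:analyse([{dest, "mod_multicast.analysis"}]).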


Unexpected improvement

The funny thing is that my patches to the ejabberd core, with the multicast service disabled, slightly reduce CPU consumption compared to the stock ejabberd trunk version. This means there is a possible optimization in ejabberd that has nothing to do with XEP33. If properly investigated, this improvement could be included in ejabberd trunk and benefit all ejabberd deployments, not only those with multicast enabled.

Saturday, 18 August 2007

Temporary Lists of Recipients - proposal for XEP33

When I started my Google Summer of Code project three months ago, Tobias Markmann pointed me to his Temporary Lists of Recipients proposal.

The purpose is to reduce bandwidth consumption even more by sharing a common list of JIDs between the two entities that maintain a XEP33 conversation.

The idea seems worth considering... once the current XEP33 is implemented and deployed in the XMPP world. So I'm bookmarking this proposal for future reference; let's see what happens.

Multiple replyto and enforce all them in XEP33

Yesterday I was chatting about XEP33 with Elmex in the ejabberd chatroom. He pointed me to a strange point in this protocol:
`There MAY be more than one replyto or replyroom on a stanza, in which case the reply stanza MUST be routed to all of the addresses.'
Here is the chatroom log.

What does that mean? If a client receives a message whose extended stanza addresses include 100 replyto or replyroom addresses and the user wants to answer, XEP33 forces the client to send the response to all 100 addresses. Why should the sending entity be allowed to force the receiving entity to answer to all the addresses, instead of giving it the option to answer only to some of them? Does this kind of enforcement exist in the email world too?

I think this topic could be reconsidered for the next XEP33 version.

ejabberd gets XEP-0133: Service Administration

One of the minor tasks in my Google Summer of Code project was to implement in ejabberd as many of the 31 commands described in XEP-0133: Service Administration as possible.

Aleksey Shchepin already implemented many commands in ejabberd more than 4 years ago. A year and a half ago Magnus Henoch updated them to use XEP-0050: Ad-Hoc Commands. So, I just had to update them a little to become XEP-0133 compliant:

  • 23. Send Announcement to Online Users
  • 24. Set Message of the Day
  • 25. Edit Message of the Day
  • 26. Delete Message of the Day
The commands that I implemented from scratch are:
  • 1. Add User
  • 2. Delete User
  • 5. End User Session
  • 6. Get User Password
  • 7. Change User Password
  • 9. Get User Last Login Time
  • 10. Get User Statistics
  • 13. Get Number of Registered Users
  • 15. Get Number of Online Users
  • 30. Restart Service
  • 31. Shut Down Service
Other commands are not implemented; I didn't add them because I consider that ejabberd already provides more suitable alternatives:
  • 8. Get User Roster
  • 18. Get List of Registered Users
  • 20. Get List of Online Users
  • 27. Set Welcome Message
  • 28. Delete Welcome Message
  • 29. Edit Admin List
And finally, I didn't implement these commands because they rely on features not available in ejabberd:
  • 3. Disable User
  • 4. Re-Enable User
  • 11. Edit Blacklist
  • 12. Edit Whitelist
  • 14. Get Number of Disabled Users
  • 16. Get Number of Active Users
  • 17. Get Number of Idle Users
  • 19. Get List of Disabled Users
  • 21. Get List of Active Users
  • 22. Get List of Idle Users
During this task, I found some typos in the XEP and reported them to its author (Peter Saint-Andre).

Finally, I tested most commands with Tkabber SVN, Psi SVN and Gajim SVN. Sergei Golovan quickly fixed a small bug in Tkabber, and now all three clients work perfectly :)

I'm quite happy with the result, so I took this screenshot, which shows the impressive list of commands that allow an administrator to configure ejabberd with just a Jabber client:


Note that the commands are nested in Service Discovery so that the admin can find them more easily.

The patch is available here. I hope it is of high enough quality to enter ejabberd trunk easily, so that it can be published in the next major ejabberd release.

Tuesday, 14 August 2007

Almost final GSoC project status

A month ago I posted my Midterm GSoC project status, and remaining work.

Since the previous status update, I have completed these tasks:

The remaining tasks that I'm aware of, from now until the end of my GSoC project, are:
  • Implement or update as much of XEP133 Service Administration in ejabberd as possible.
  • Perform code profiling to find bottlenecks and deficiencies in mod_multicast. Improve the code.
  • Perform benchmarks to check mod_multicast's effect in CPU, RAM and traffic consumption.
  • Prepare and submit patches to ejabberd bug tracker.
  • Upload final code to Google Summer of Code hosting.
  • Wait for the ejabberd code reviewers, in case I need to fix any problem in my code before committing to ejabberd.
  • Discuss potential security and spam vulnerabilities (talk on the JDEV and JADMIN mailing lists).
  • Add XEP33 support to ejabberd's Pub/Sub and/or PEP service if their codebase is stable at the time.

Monday, 13 August 2007

XEP33 implementations: separate service or embedded support?

The current version of XEP-0033: Extended Stanza Addressing says:

The IM service MAY implement multicast directly, or it MAY delegate that chore to a separate service.
Where must a Jabber entity send message and presence stanzas with XEP33 addresses if it expects them to be routed as specified in XEP33? It must send them to a Jabber entity that advertises this feature: http://jabber.org/protocol/address.

Which entities may support this feature? A Jabber server may have embedded support for XEP33; let's suppose the server JID is jabber.example.org. Or it can delegate that task to a separate service, whose JID could be multicast.jabber.example.org.

How can a Jabber entity know whether its local server supports XEP33? By sending a disco#info query to the server (whose JID is jabber.example.org). However, this is not enough when the server delegates to a separate service. So the entity should also ask the first-level services provided by the server: chatrooms.jabber.example.org, pubsub.jabber.example.org, ... and also multicast.jabber.example.org.

During my GSoC project, I implemented the server-part of XEP33 in an ejabberd module called mod_multicast. This module provides a separate service just for multicast. This means that an ejabberd server with JID jabber.example.org, with my work installed and enabled, will provide XEP33 support in a service with JID multicast.jabber.example.org.

I implemented it as a separate service for efficiency reasons. I consider that listening on the main server JID for XEP33-enabled stanzas would require more code (well, no more than 30 lines of code) and more computation than listening on a specific JID.

This is not a big problem with message and presence stanzas, since the main server JID is not expected to receive message or presence stanzas at all. But think about iq stanzas: the server receives a lot of iq requests and sends iq replies, and remember that a XEP33 server will also send iq queries and receive replies from remote servers. I thought that using the main JID both for typical iq tasks and for multicasting would be a bit messy, so I preferred to keep all multicasting on a separate, specific JID.

As XEP33 gets more widely adopted, it may make sense to move all the XEP33 code from mod_multicast into an internal core file, and serve it embedded instead of as a separate service. But right now, I think the current solution is clean, efficient, and respects the protocol.

What about clients and remote servers? Obviously, it isn't efficient to query all the first-level items of a server just to find out whether one of them supports XEP33; it would be faster to ask only the server. The separate-service approach translates into three costs: more code, more CPU consumption and more bandwidth consumption.

However, these are not much of a problem. Probably 20 or 30 lines of code are enough to program the loop that checks all the server's items, and this check is done only the very first time a server queries another server. Once a server or client knows that jabber.example.org supports XEP33 at multicast.jabber.example.org, this knowledge is stored in a cache. When the cache item becomes obsolete (maybe after 12 or 24 hours), there is no need to perform another full disco traversal: the client only needs to revalidate the cache item by asking for features directly from multicast.jabber.example.org.
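A minimal sketch of that loop in Erlang, assuming hypothetical InfoFun and ItemsFun callbacks that perform the actual disco#info and disco#items queries (caching is left out for brevity):

-module(multicast_disco).
-export([find_multicast_jid/3]).

-define(NS_ADDRESS, "http://jabber.org/protocol/address").

%% Returns {ok, JID} for the entity that advertises XEP-0033 support:
%% either the server itself (embedded) or one of its first-level items
%% (separate service). InfoFun(JID) -> [Feature], ItemsFun(JID) -> [ItemJID].
find_multicast_jid(ServerJID, InfoFun, ItemsFun) ->
    case lists:member(?NS_ADDRESS, InfoFun(ServerJID)) of
        true  -> {ok, ServerJID};
        false -> find_in_items(ItemsFun(ServerJID), InfoFun)
    end.

find_in_items([], _InfoFun) ->
    not_found;
find_in_items([ItemJID | Rest], InfoFun) ->
    case lists:member(?NS_ADDRESS, InfoFun(ItemJID)) of
        true  -> {ok, ItemJID};
        false -> find_in_items(Rest, InfoFun)
    end.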

I'm aware of only three programs that implement XEP33, or a part of it:
  • Openfire server has basic support for XEP33. It provides the feature embedded. It only queries the server, not the services.
  • Psi client has very basic support for sending XEP33 message stanzas. It only queries the server, not the services.
  • Tkabber client has very basic support for showing extended information included in XEP33 message stanzas. Since it does not send XEP33 stanzas, it does not need to query for XEP33 support.
This means that ejabberd's mod_multicast can send to Openfire, but Openfire and Psi can't send to ejabberd, because they are unaware that the ejabberd server provides XEP33 support in a separate service. Note that all three programs implement XEP33 correctly, and even so they are incompatible in practice.

Yesterday I chatted about this issue with Gaston Dombiak (Gato from Openfire) and Kevin Smith (Kev from Psi). They are interested in implementing the rest of the XEP, including the part that I explained previously. Of course, this interest is conditional on the success of the protocol: it must also be implemented by other software, and be widely used.

So, once a new and updated version of XEP33 is published with the improvements that I proposed to Peter Saint-Andre, I'll file a bug report in the Psi and Openfire bug trackers.

Until then, I still need to do some cleaning and profiling in mod_multicast.

Friday, 10 August 2007

Summary of XEP33 addresses limits

This post summarizes and updates everything I have said in the past weeks in these posts: The limit of addresses in XEP33 must be fixed, XEP33: types of limits and default values, XEP33: Tell limits in disco#info response using XEP128, and Updates to XEP33 limits proposal.


Introducing the problem

Let's suppose that limiting the number of destination addresses in a XEP33 stanza really serves a purpose, for example to prevent or reduce abuse of the multicast service. To count how many 'addresses' there are in a stanza, only TO, CC and BCC addresses are considered, since those are the ones that generate traffic.

XEP33 says that a server should have a limit for the maximum number of addresses allowed on a single packet: the limit SHOULD be more than 20 and less than 100.

That limit is easy to implement on the receiving side. But what about the sender? How many addresses can a sender put in each packet? If it puts too many, the packet will be rejected. If it puts too few, it is not benefiting from XEP33 as much as it could.

With the current version of XEP33, remote servers may allow as few as 20 or as many as 100 addresses. This means that a sender has to stick to the lowest common denominator in order to avoid rejections: it can only send up to 20 addresses in each packet.

If we already know that 20 is the limit in practice, why bother telling admins that they can allow 30, 40 or more on their servers? Nobody will send more than 20 addresses in a packet!


Proposed solution: configurable limits, and method to inform

Allow configurable limits in the protocol for each different condition, define default values in the protocol, and describe a method for senders to learn which limits apply on each destination server.

Other possible limitations to reduce abuse of a multicast service are the number of messages per minute, the number of addresses per minute, the total number of bytes sent... But I don't expect them to be interesting for inclusion in XEP33.


Types of limits

Several limits can be defined, depending on the characteristics of a XEP33 stanza:

  • sender is: local or remote
  • the stanza type is: message or presence. Note that iq stanzas don't directly include XEP33 addresses.
There is no way to know whether a XEP33 stanza was sent by a user or by a server/service, so that categorization is not possible.

Those categories do not make it possible to differentiate the stanzas sent by a trusted local service (like the MUC or Pub/Sub components) from those of all other possible senders. Obviously, the trusted local services operated by the same administrator who installed the multicast service should have unrestricted access to it. This possibility is an implementation-specific issue which will not be covered by XEP33.

The mentioned stanza characteristics make it possible to define four different limits (a configuration sketch follows the lists below):
  • local message
  • local presence
  • remote message
  • remote presence
The allowed values for the limits are:
  • Positive integers, including zero: 0, 1, 2, ...
  • the keyword 'infinite', which means that the limit is not applied at all.
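To make this concrete, a hypothetical mod_multicast configuration fragment could express those four limits like this (the option names are illustrative, not the final ones committed to ejabberd):

{mod_multicast, [
  {limits, [
    {local,  message,  100},       %% local senders: 100 addresses per stanza
    {local,  presence, infinite},  %% no limit for local presence stanzas
    {remote, message,  20},        %% remote senders: keep the XEP33 default
    {remote, presence, 20}
  ]}
]}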

Method to inform

This method uses XEP-0128: Service Discovery Extensions, as proposed by Ralphm in a comment.

How does this work? Currently, when an entity wants to send a XEP33 stanza, it first checks whether a XEP33-enabled service is available. To check that, it sends a disco#info query to the service and looks for the http://jabber.org/protocol/address feature in the response.

If there are limits to report, the disco#info response not only announces XEP33 support, but also announces the exact limits in effect in the service.

When a multicast service announces limits in a disco#info response, it SHOULD only report limits which are configured to a value different from the default defined in XEP33. So, if XEP33 says that a given limit defaults to 20 but the limit in effect on a server is 30, then the server must report that limit. If the limit in effect is the default value, it SHOULD NOT be specified at all in disco#info, to save bandwidth.

Similarly, when a multicast service announces limits in a disco#info response, it SHOULD only report the limits that will be applied to the entity performing the request. The reason is that users of the local server and remote users/servers/services will have different limits, and it's a waste of bandwidth to announce limits to an entity that will never be affected by them.

The entity that requested this info must cache those limits for later reference.

Let's see an example. The Jabber server capulet.com wants to send a stanza with XEP33 addresses to the Jabber server shakespeare.lit. It first sends a disco#info query; the response announces XEP33 support and also provides information about several limits:

<iq type='get'
    from='capulet.com'
    to='shakespeare.lit'
    id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>

<iq type='result'
    from='shakespeare.lit'
    to='capulet.com'
    id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <identity
        category='server'
        type='im'
        name='shakespeare.lit jabber server'/>
    ...
    <feature var='http://jabber.org/protocol/address'/>
    <x xmlns='jabber:x:data' type='result'>
      <field var='FORM_TYPE' type='hidden'>
        <value>http://jabber.org/protocol/address</value>
      </field>
      <field var='message'>
        <value>20</value>
      </field>
      <field var='presence'>
        <value>infinite</value>
      </field>
    </x>
    ...
  </query>
</iq>


Apply limits to incoming stanzas

When a XEP33-enabled entity receives a stanza to be routed to other destinations, the number of destination addresses is compared with the limit in effect for that kind of stanza. If the stanza has more addresses of type TO, CC and BCC than allowed, an error message is returned to the original sender.
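A minimal Erlang sketch of that check, assuming the addresses have already been extracted into a list of {Type, JID} tuples and that the limit is either an integer or the atom 'infinite':

%% Only 'to', 'cc' and 'bcc' addresses count towards the limit.
count_recipients(Addresses) ->
    length([JID || {Type, JID} <- Addresses,
                   Type =:= to orelse Type =:= cc orelse Type =:= bcc]).

check_limit(_Addresses, infinite) ->
    ok;
check_limit(Addresses, Limit) when is_integer(Limit) ->
    case count_recipients(Addresses) =< Limit of
        true  -> ok;
        false -> {error, limit_exceeded}   %% bounce an error to the original sender
    end.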


Take into account limits when sending stanzas

When any Jabber entity is about to send a XEP33 stanza, it MUST make sure the number of destination addresses is not greater than the limit reported by the destination entity. If it is, the destinations can be split into several groups (or batches).
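For example, a sender could split the destination list into batches with something like this sketch:

%% Split Dests into lists of at most Limit elements, so that every outgoing
%% XEP33 stanza respects the limit reported by the destination server.
split_destinations(Dests, Limit) when is_integer(Limit), Limit > 0 ->
    split_destinations(Dests, Limit, []).

split_destinations([], _Limit, Acc) ->
    lists:reverse(Acc);
split_destinations(Dests, Limit, Acc) when length(Dests) =< Limit ->
    lists:reverse([Dests | Acc]);
split_destinations(Dests, Limit, Acc) ->
    {Batch, Rest} = lists:split(Limit, Dests),
    split_destinations(Rest, Limit, [Batch | Acc]).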

Tuesday, 7 August 2007

On travel for the next 3 days

This is just to let you know that I'll be 'completely away from keyboard' for the next three days. I expect to return in the evening of 9 August, GMT.

Don't worry about the progress of my GSoC project: I'll carry my hand-written design diagrams for the next mod_multicast code I'll write, blank paper, a black pen, a blue pen, and Programming Erlang - Software for a Concurrent World.

Updates to XEP33 limits proposal

I previously proposed some limits on the number of addresses, and a way to report them in the disco#info response using XEP128.

All this needs some modifications, which I explain now.

1. To count how many 'addresses' there are in a stanza, only TO, CC and BCC addresses are considered, since those are the ones that generate traffic.

2. When a multicast service announces limits in a disco#info response, it SHOULD only report limits which are configured to a value different from the default defined in XEP33. So, if XEP33 says that a given limit defaults to 20 but the limit in effect on a server is 30, then the server must report that limit. If the limit in effect is the default value, it SHOULD NOT be specified at all in disco#info, to save bandwidth.

3. Similarly, when a multicast service announces limits in a disco#info response, it SHOULD only report the limits that will be applied to the entity performing the request. The reason is that users of the local server and remote users/servers/services will have different limits, and it's a waste of bandwidth to announce limits to an entity that will never be affected by them.

4. The limits worth considering can't be categorized as 'user' or 'server', since the multicast service has no easy way to know whether a stanza was generated by a user or by a server. So, the characteristics of a XEP33 stanza that can be used to differentiate stanzas and apply fine-grained limitations are:

  • sender is: local or remote
  • the stanza type is: message or presence
Those categories do not make it possible to differentiate the stanzas sent by a trusted local service (like the MUC or Pub/Sub components) from those of all other possible senders. Obviously, the trusted local services operated by the same administrator who installed the multicast service should have unrestricted access to it. This possibility is an implementation-specific issue which will not be covered by XEP33.

Monday, 6 August 2007

GSoC status update: collateral tasks

During the last week I haven't dedicated time to coding in my GSoC project. Instead, I focused on other things that are not directly related, but that I consider important too.

I summarized my proposed changes to XEP33 in the XEP33 wiki page, and pinged Stpeter to take a look.

I participated in the discussions on the ejabberd mailing list about ejabberd project management, the release cycle, the bug tracker, etc. I hope that in the next few weeks documents will appear describing ejabberd project management, how to submit patches...

I also started to learn basic Emacs usage (it took me a full day to customize it to my needs). I'm a Vim guy, and I find Vim better suited for programming, but now I'll use Emacs for SVN tasks. Emacs helps with ChangeLog writing; psvn.el helps with SVN; and erlang-mode provides a standard code indentation system, among other things.

This week was not completely lost, after all. In fact, GSoC is not just about 'producing code', but also about learning. And I learned a lot this week.

Ahh! I also started to practice car driving, for the first time in my life. There isn't a particular reason to learn now and not before. Well, maybe I thought: if I started learning Emacs, why not car driving? Self-learning rules.

Now it's time for GSoC coding. I'm designing, coding and testing XEP33 address limits in ejabberd's mod_multicast.