Thursday, November 29, 2007

How to access Alfresco from Vista using Network Location

Abstract: This article describes how to set up a Network Location in Microsoft Windows Vista to access Alfresco.

Alfresco is an open source digital asset management system with many useful features, including support for multiple remote-access protocols. One of them is access to the repository via CIFS/WebDAV, or, in plain English, from your desktop. This gives you an easy way to transfer a bunch of files to the digital repository while preserving the folder hierarchy.

Here is a list of actions:

1. Go to My Computer, right-click it, and choose 'Add Network Location'.
You'll see the wizard window. Click Next, choose a custom location on the next screen, and click Next again. In the dialog that asks for the location of your website, type something like http://youralfrescoinstallation.com:8080/alfresco/webdav and hit Next. This brings up a popup that prompts for access credentials; enter your username and password.

2. If all goes well, the next window asks 'What do you want to name this location?'. Type the name you want and hit Next. You should then see the final screen of the wizard, confirming successful creation of the network location; leave the default choices and hit Finish.

3. Now you have a web folder linked to your Alfresco digital asset repository, and you can drag and drop files into it or copy files to your computer.

Enjoy your Alfresco! :)

Possible problems:
You might get stuck on step one because of differences in protocol implementations: whatever you enter doesn't let you proceed to the next step.
I resolved this by downloading and installing the following update for Microsoft Windows Vista Home Edition, called Software Update for Web Folders (KB907306):

http://www.microsoft.com/downloads/details.aspx?FamilyId=17C36612-632E-4C04-9382-987622ED1D64&displaylang=en
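Before fighting the wizard, it can help to confirm that the WebDAV endpoint answers at all from a command line. A minimal sketch, assuming curl is available; the hostname, port, and credentials are placeholders for your own installation:

```shell
# Build the Alfresco WebDAV URL from a host and port (placeholders, not a
# real installation). PROPFIND is the standard WebDAV method for listing
# a collection, so a successful response means the endpoint is alive.
webdav_url() {
  echo "http://$1:$2/alfresco/webdav"
}
webdav_url youralfrescoinstallation.com 8080

# With network access you could then probe it (commented out here):
# curl -u username:password -X PROPFIND -H "Depth: 1" \
#   "$(webdav_url youralfrescoinstallation.com 8080)"
```

If the probe succeeds from a command line but the wizard still refuses the URL, the KB907306 update mentioned under 'Possible problems' is the likely fix.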

Thursday, November 15, 2007

Integrating Java and C apps on Linux

Abstract: This article consists of several parts describing my experience integrating C programs into a Java-based infrastructure such as Java messaging.

Part 1. How to build C client for JMS on Fedora Core 6


In this part I'll show how to build a C client for Java Message Queue on Fedora Core 6. The story began with a cool C application that had the functionality I needed, while I had no time or resources to rewrite it in Java. We'll need to install the following packages:


-bash-3.1# yum install nss.i386 compat-libstdc++-33.i386 screen


This will install the C++ compatibility libraries, NSS, and dependent libraries.
Now we need the Open Message Queue server and client code (community version). These commands will download and extract the distribution into the /opt/sun/mq directory:


-bash-3.1#wget --no-check-certificate https://mq.dev.java.net/files/documents/5002/66518/mq4_1-binary-Linux_X86-20070816.jar
-bash-3.1#mkdir -p /opt/sun
-bash-3.1#cd /opt/sun
-bash-3.1#unzip mq4_1-binary-Linux_X86-20070816.jar
-bash-3.1#cd mq

Let's run the messaging server by issuing the following commands:

-bash-3.1#screen

This will create a screen session.

-bash-3.1#/opt/sun/mq/bin/imqbrokerd -tty

This will start the server, printing a bunch of information to the terminal.

To detach from the screen session, press Ctrl+A, then D (the server will keep running, and you can return to it later with screen -r).


In this tutorial I'll limit myself to a single example, although three are available in the demo directory. Let's use producer_consumer for the sake of simplicity and change the working directory:


-bash-3.1#cd demo/C/producer_consumer

There you'll see two C files: Consumer.c and Producer.c.

At this point you are ready to start building your clients. Issue the following command:

-bash-3.1# g++ -DLINUX -D_REENTRANT -I/opt/sun/mq/include -o Producer -L/opt/sun/mq/lib -lmqcrt Producer.c

You may see a warning that complains about a possible conflict:

/usr/bin/ld: warning: libstdc++.so.5, needed by /opt/sun/mq/lib/libmqcrt.so, may conflict with libstdc++.so.6

If we look at the directory contents now, it should contain a binary named Producer. If you try running it, it might spit out the following error:

-bash-3.1# ./Producer

./Producer: error while loading shared libraries: libmqcrt.so.1: cannot open shared object file: No such file or directory

This is easily fixed by adding the library location to the loader path. Let's create the file

-bash-3.1# nano /etc/ld.so.conf.d/mq.conf

and add the line /opt/sun/mq/lib to it.

Then just run the following command:

-bash-3.1#ldconfig
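As an aside, if you'd rather not edit /etc/ld.so.conf.d system-wide, a per-session alternative is LD_LIBRARY_PATH. A sketch (the /opt/sun/mq/lib path comes from the install above):

```shell
# Point the dynamic loader at the Open MQ libraries for this shell only,
# preserving any pre-existing LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=/opt/sun/mq/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

The ldconfig route is better for anything long-lived, since the variable only lasts for the current session.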

Now you should be able to build both source files and run them without errors:

-bash-3.1#g++ -DLINUX -D_REENTRANT -I/opt/sun/mq/include -o Consumer -L/opt/sun/mq/lib -lmqcrt Consumer.c

-bash-3.1#g++ -DLINUX -D_REENTRANT -I/opt/sun/mq/include -o Producer -L/opt/sun/mq/lib -lmqcrt Producer.c

-bash-3.1#ln -s Producer p; ln -s Consumer c; ls -al

At this point you should have C client binaries that can publish messages to and read messages from the messaging server.

For usage options, run

-bash-3.1#./Producer -help

or

-bash-3.1#./Consumer -help

If you want to know more about the messaging server used in this tutorial, visit http://www.sun.com/software/products/message_queue/index.xml. In the next part of the tutorial I'll describe integrating the C client into an existing application. Stay tuned.

Monday, October 1, 2007

How to setup Elastic Drive in Fedora Core 6 running under VMWare

Abstract: This article gives instructions for installing Elastic Drive in FC6 running under VMware.

Having received several requests for help installing Elastic Drive under Fedora Core 6, I've put together the short tutorial you'll see below.

VMWARE

To start, you'll need VMware Player or Server installed.
I downloaded Fedora Core from this place, but it doesn't look very different from the one on the Fedora site, in the sense that you still need to do some configuration of your Fedora install. This takes about 2 minutes.

Set your system time in sync with S3, since it doesn't allow a clock difference of more than 10 minutes. Go to System > Administration > Date and Time, check 'System clock uses UTC', and choose Americas etc. to get EDT. Then set your system time to match S3 precisely in a terminal (or you can do it with the utilities above):

date -s "enter S3 time here"
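One way to fill in 'enter S3 time here' automatically is to read the Date header that S3 sends with every HTTP response. A sketch, assuming curl is installed; the helper just strips the header name:

```shell
# Strip the "Date: " prefix (and any trailing CR) from an HTTP header line.
header_date() {
  printf '%s\n' "$1" | sed 's/^[Dd]ate:[[:space:]]*//' | tr -d '\r'
}
header_date "Date: Mon, 01 Oct 2007 12:00:00 GMT"

# With network access, and as root, you could then set the clock directly:
# date -s "$(header_date "$(curl -sI https://s3.amazonaws.com | grep -i '^Date:')")"
```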

Then I applied the latest updates and proceeded to the terminal, where I entered the following:

SETUP PRE-REQUISITES

yum install fuse fuse-devel python-devel gcc-c++ gcc gcc-devel glibc-devel glibc-headers libgomp libstdc++-devel


SETUP ELASTIC DRIVE

cd /opt

wget http://www.elasticdrive.com/uploads/media/elasticdrive-0.4.0_dist.tar.gz

tar zxf elasticdrive-0.4.0_dist.tar.gz

ln -s elasticdrive-0.4.0_dist e

cd e

./install

mkdir -p /fuse /fuse2 /data/s3

RUN ELASTIC DRIVE

At this point, if you haven't hit any errors, you have the ElasticDrive application installed and ready to run. As recommended, you'll need to edit the configuration file located at /etc/elasticdrive.ini (we'll omit the configuration details, since they are well described on the site).
To run the application, enter the following in a terminal:

elasticdrive /etc/elasticdrive.ini -d

ps aux | grep elastic

ls -al /fuse2

If elasticdrive shows up in the list of processes, it is running. The last command should also show something like ed0.


SETUP FILESYSTEM

If all of the above shows up, let's do the file system work:

mke2fs -b 4096 /fuse2/ed0
# choose 'y' when you see Proceed anyway? (y,n)

mount -o loop /fuse2/ed0 /data/s3


TEST ELASTIC DRIVE

At this point you should have a file system ready for work. Try copying some files into /data/s3 or test it in some other way:

df -h

cd /data/s3

wget http://www.elasticdrive.com/uploads/media/elasticdrive-0.4.0_dist.tar.gz
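A slightly more rigorous test than eyeballing df is a checksum round trip. The sketch below exercises the idea against temporary paths so it runs anywhere; to test the real thing, use /data/s3 (mounted above) as the destination:

```shell
# Round-trip integrity check: write a random file, copy it, compare checksums.
src=$(mktemp)
dst=$(mktemp -d)   # stand-in for /data/s3
head -c 65536 /dev/urandom > "$src"
cp "$src" "$dst/testfile"
a=$(md5sum < "$src" | cut -d' ' -f1)
b=$(md5sum < "$dst/testfile" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "round trip OK"
rm -rf "$src" "$dst"
```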

Tuesday, September 25, 2007

How to use Elastic Drive VMWare virtual appliance

Abstract: This article describes installing Elastic Drive as a VMware virtual appliance.

VMWARE PLAYER OR SERVER

- download the virtual appliance from the following link:
http://s3.amazonaws.com/vmware_appliances/khaz_VMWare_Debian_ElasticDrive_Public.rar
- unpack it into your Virtual Machines directory.
- File > Open > Browse > path to your dir > filename.vmx
- in the Debian terminal, at the command prompt, use the following credentials: username khaz / password khaz / superuser khazrocks99

VMWARE APPLIANCE CONFIGURATION

- change the root password, delete the user khaz
- set up the environment to your taste, if you want

CONFIGURING APPLICATION

- edit /etc/elasticdrive.ini
- add your credentials in the [drives] section (after 's3://S3ACCESSKEY:S3SECRETKEY')
- create a bucket name (maybe instance id + a number)
- specify the disk size by setting '&blocks=' (the default of 65536 blocks gives you 268 MB)
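For reference, here is where the 268 figure comes from, assuming ElasticDrive's 4096-byte block size (the same block size used with mke2fs in the companion FC6 tutorial); the arithmetic is just blocks times block size:

```shell
# 65536 blocks at an assumed 4096-byte block size:
echo $(( 65536 * 4096 ))             # 268435456 bytes
echo $(( 65536 * 4096 / 1000000 ))   # 268, i.e. roughly 268 (decimal) MB
```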

RUNNING APPLICATION

- /etc/init.d/elasticdrive_khaz start
- the application should start automatically on reboot

USING YOUR NEW FILESYSTEM

Now you can use your file system, which is persisted to Amazon Simple Storage Service (S3).
Try copying some files to /data/s3, or run 'umount /fuse2/ed0' (if you have lots of data this takes more time, up to 30 minutes); 'ls -al /data/s3' should show nothing while unmounted; then reboot and run 'ls -al /data/s3' again.

EVACUATION OF DATA FROM VMWARE APPLIANCE

This basically covers unmounting your current file system and transferring its state to an external persistence source.
Initially I tested creating a 24 GB drive, placing files of various sizes on it, and unmounting. The first umount took about 20 minutes or so; all subsequent ones finished in less than a minute.

Thursday, September 20, 2007

Setting Up Elastic Drive at Amazon EC2

Abstract: This article contains instructions for running Debian Linux (etch) on Amazon EC2 with Elastic Drive linking the instance to S3 storage.

EC2 CONTROL TOOLS
- run an instance of the following AMI: ami-7cfd1815, and note its public DNS address
- ssh to that public DNS address using the following credentials: username khaz / password khaz / superuser khazrocks99

EC2 INSTANCE CONFIGURATION
- change the root password, delete the user khaz
- set up the environment to your taste, if you want

CONFIGURING APPLICATION
- edit /etc/elasticdrive.ini
- add your credentials in the [drives] section (after 's3://S3ACCESSKEY:S3SECRETKEY')
- create a bucket name (maybe instance id + a number)
- specify the disk size by setting '&blocks=' (the default of 65536 blocks gives you 268 MB)

RUNNING APPLICATION

- /etc/init.d/elasticdrive_khaz start
- the application should start automatically on reboot

USING YOUR NEW FILESYSTEM

Now you can use your file system, which is persisted to Amazon Simple Storage Service (S3).
Try copying some files to /data/s3, or run 'umount /fuse2/ed0' (if you have lots of data this takes more time, up to 30 minutes); 'ls -al /data/s3' should show nothing while unmounted; then reboot and run 'ls -al /data/s3' again.

EVACUATION OF DATA FROM EC2 INSTANCE

This basically covers unmounting your current file system and transferring its state to an external persistence source. Initially I tested creating a 24 GB drive, placing files of various sizes on it, and unmounting. The first umount took about 20 minutes or so; all subsequent ones finished in less than a minute.

Thursday, August 23, 2007

Treemap Widget in Dojo GFX

Last week I was working on implementing a 'bin packing' algorithm and visualizing the output on a 2D plane using the Dojo toolkit. At the time of this post it's available here: http://facebook.enomalylabs.com/treemap7.php

It shows categories of footprint and their descendant items, implemented in Dojo GFX (SVG/VML).

Some information on bin packing:

In computational complexity theory, the bin packing problem is a combinatorial NP-hard problem. In it, objects of different volumes must be packed into a finite number of bins of capacity V in a way that minimizes the number of bins used.

There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. They have many applications, such as filling up containers, loading trucks with weight capacity, and creating file backup in removable media.

Since it is NP-hard, the most efficient known algorithms use heuristics to accomplish results which, though very good in most cases, may not be the optimal solution. For example, the first fit algorithm provides a fast but often nonoptimal solution, involving placing each item into the first bin in which it will fit. It requires O(n log n) time. The algorithm can be made much more effective by first sorting the list of elements into decreasing order (sometimes known as the first-fit decreasing algorithm), although this does not guarantee an optimal solution, and for longer lists may increase the running time of the algorithm.
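The first-fit decreasing heuristic described above is short enough to sketch. This is an illustrative bash version (not the code behind the demo, which uses Dojo and PHP): it packs integer sizes into fixed-capacity bins and reports how many bins were used.

```shell
# First-fit decreasing: sort items in decreasing order, then place each into
# the first bin with enough remaining capacity, opening a new bin if none fits.
first_fit_decreasing() {
  local cap=$1; shift
  local -a remaining=()          # remaining capacity of each open bin
  local item i placed
  for item in $(printf '%s\n' "$@" | sort -rn); do
    placed=0
    for i in "${!remaining[@]}"; do
      if (( remaining[i] >= item )); then
        (( remaining[i] -= item ))
        placed=1
        break
      fi
    done
    if (( placed == 0 )); then
      remaining+=( $(( cap - item )) )
    fi
  done
  echo "${#remaining[@]}"        # number of bins used
}

first_fit_decreasing 10 2 5 4 7 1 3 8   # total volume 30, capacity 10
```

On this input the heuristic happens to find an optimal packing (3 bins, matching the volume lower bound of 30/10); in general FFD only guarantees a result within roughly 11/9 of optimal.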

update:

I got the idea to use the treemap widget in the Ganglia project.
There are two rationales:
a) use the treemap to visualize metrics, so the display is more compact and doesn't poll every X amount of time.
b) use client-side code to render the readings, as a more compact way to display multiple parameters.

In the following demo I used the Dojo toolkit (version 0.9) and PHP: http://facebook.enomalylabs.com/ganglia.php

Dojo's SVG support is still experimental, though (I think it is a wrapper interface over both implementations).

I'm planning to implement another approach that uses curves instead of rectangles (based on Voronoi tessellations).

Sunday, August 5, 2007

How to install Attansic L2 Network Driver in Fedora Core 7

Having bought a new box for my kids, I got an ASUS P5GC-MX motherboard with an Intel chipset and installed Fedora Core 7 (Live CD).

Everything worked fine except the integrated network card from Attansic (Taiwan). First of all, I couldn't figure out what type of device I had; two Linux drivers were provided: L1 and L2.

A little bit of googling and voila: it's the L2!
As it happened, the driver required building a Linux kernel module (the one responsible for the network), since the Live CD install didn't include it by default.
What?!
Building a kernel module? Hmm... (why don't I just install Windows, I asked myself several times during this procedure :) )

My default method for installing required components is yum, but it was useless without network setup. So I had to download all the packages on my other desktop, copy them to a USB key, and then use the key on the target Linux box.

You will need kernel-headers-2.6.21-1.3194.fc7.i386.rpm, kernel-devel-2.6.21-1.3194.fc7.i686.rpm, and gcc with its dependencies:
- glibc-2.6-3.i386.rpm,
- glibc-headers-2.6-3.i386.rpm,
- libgomp-4.1.2-12.i386.rpm,
- glibc-devel-2.6-3.i386.rpm,
- gcc-4.1.2-12.i386.rpm,
- cpp-4.1.2-12.i386.rpm,


and install them using RPM (e.g. rpm -Uvh _name_here.rpm).

To verify that you are ready to build the kernel module, check the contents of /usr/src; it should contain a 'kernels' directory.
Now follow the instructions: change the current directory to the driver's src/ directory and run
'make install'.

I had some issues with the drivers provided on the motherboard CD; as it happened, it was an outdated version, something like 0.2.40.0. It complained about a missing config.h, etc.

After some googling, I figured out that version 0.2.40.0 is for older kernels (before 2.6.21). I then found a newer version of the driver and got rid of the config.h and other header errors, but got new ones about non-declared struct members. A short look at the C code showed that some debugging parts were superfluous, so simply commenting them out did the trick.

At this point I ran 'make install', then 'cd /lib/modules//kernel/drivers/net' and 'insmod atl2.ko', and got the network device working. Then I went to Network Configuration and added the IP address and DNS information.
After activating the device I was able to browse the internet.
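For completeness, the kernel-version segment of that cd path is elided in the post; filling it from uname -r is an assumption that holds for the standard module layout:

```shell
# Build the module directory path; the kernel-version segment (elided above)
# is taken from uname -r, the version of the running kernel.
moddir="/lib/modules/$(uname -r)/kernel/drivers/net"
echo "$moddir"

# Then, as root, with the built atl2.ko copied into that directory:
# cd "$moddir" && insmod atl2.ko
```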

A newer version of the driver is (was) available here:
http://launchpadlibrarian.net/7382416/L2-linux-driver_new.rar

RPMs are available at ftp://download.fedora.redhat.com/pub/fedora/linux/releases/7/Everything/i386/os/Fedora/

Good luck !

ps. I also found this site (it has a Linux drivers package, but the download was so slow that I didn't test it): http://support.asus.com/download/download_item.aspx?model=P5GC-MX&product=1&type=map&mapindex=3&SLanguage=en-us


pps.

When applying updates that include a kernel update, the system boots into the new kernel after reboot and the module we built is not loaded, so no network is available. In my case I just edit grub.conf, set the old kernel (the one we built the driver for) as default, and reboot.

Wednesday, June 13, 2007

Technology advancement

It's interesting how computer technology has advanced over the last decade. In 1990 it was the privilege of math or EE graduates and some kids with technically inclined parents, and now my 5-year-old son is happily playing pretty complex games.

Friday, April 6, 2007

Alfresco 2.0 Community Edition

Today I installed the Alfresco 2.0 community edition. The process wasn't different from version 1.4; usually I take the .war file and deploy it on Tomcat. This time it was done on an Amazon Elastic Compute Cloud instance under Fedora Core 4, using my previous image with Tomcat and MySQL already set up. To clean up, I removed the .war file and the respective directories, dropped the MySQL database, and recreated it. [to be continued]

Sunday, March 18, 2007

Cluster Management

Today I've installed Moab Cluster Suite:
Moab Cluster Suite® is a professional cluster management solution that integrates scheduling, managing, monitoring and reporting of cluster workloads. Moab Cluster Suite simplifies and unifies management across one or multiple hardware, operating system, storage, network, license and resource manager environments to increase the ROI of cluster investments. Its task-oriented graphical management and flexible policy capabilities provide an intelligent management layer that guarantees service levels, speeds job processing and easily accommodates additional resources.

http://domu-12-31-34-00-01-eb.usma2.compute.amazonaws.com:8080/map/

and Gold:
Gold is an open source accounting system developed by Pacific Northwest National Laboratory (PNNL) as part of the Department of Energy (DOE) Scalable Systems Software Project (SSS). It tracks resource usage on High Performance Computers and acts much like a bank, establishing accounts in order to pre-allocate user and project resource usage over specific nodes and timeframes. Gold provides balance and usage feedback to users, managers, and system administrators.

Users of Moab Workload Manager can integrate with Gold to track and charge job resource usage, providing greater control over who is using the cluster or grid resources.

http://domu-12-31-34-00-01-eb.usma2.compute.amazonaws.com/cgi-bin/gold/index.cgi

Overall verdict: a pretty decent piece of software.

EC2 Amazon - QEMU Windows Images

This is an update: my account at SourceForge has been approved, so all Qemu images will be hosted at this address (http://sourceforge.net/projects/qemuwinrepo/).

Wednesday, February 28, 2007

EC2 Amazon - QEMU Windows Images

Recently I succeeded in running Microsoft Windows Server 2003 on Amazon Elastic Compute Cloud. That was done by running it under Qemu (read more on it). I used a trial version of Windows to be safe from a legal point of view (though I'm not a lawyer) and ran the installation procedure, which led to a working copy of the OS available for remote administration. It was released to the public as a freely available Amazon Machine Image (read more on it).

Some people asked me to create an AMI with MSSQL Server or Oracle, etc., but I got the idea of creating a repository of Qemu images (an analog of VMware appliances), so that you can have a base Qemu image and different overlays for various install layouts. This would save space and traffic, as well as decrease installation time considerably.

Currently I'm waiting for a response to my registration at SourceForge. If they allow hosting of this repository, I'll put my images there and place a link to it here.

Zend PHP IDE review

Though I use EditPlus on a daily basis, here is a list of features in the Zend IDE that I personally like:
  1. Templates
  2. Conditional breakpoints
  3. Docking windows
  4. Profiler (it also has remote profiling)
  5. Code Analyzer
  6. Clone View
  7. CVS Integration
  8. Real-time errors
  9. CVS Diff
  10. Goto source
  11. Code folding

I like Google more and more

I like Google's recent apps like Spreadsheets and Docs; now I don't need MS Office or OpenOffice installed. It also integrates your mail, blog, calendar, and documents, which is very convenient. Good work, Google!

Cloud Computing Google Group