Tuesday, September 22, 2009

Stored Procedure

Stored procedures are compiled SQL code stored in the database. Calling stored procedures instead of sending over query strings improves the performance of a web application. Not only is there less network traffic, since only short commands are sent instead of long query strings, but the execution of the code itself also improves, because a stored procedure is compiled ahead of time. SQL Server also caches stored procedures when they are run, which speeds up subsequent calls.

Beyond performance, stored procedures are also helpful because they provide another layer of abstraction for your web application. For instance, you can change a query in a stored procedure and get different results without having to recompile your objects. Using stored procedures also keeps your objects cleaner, and SQL Server's convenient backup tool makes it easy to back them up.


Suppose we have the following query that retrieves messages from a given thread:

SELECT message_id,
thread_id,
user_id,
first_names,
last_name,
email,
subject,
body,
date_submitted,
category_name,
category_id,
last_edited
FROM message_view
WHERE thread_id = @iThreadID
ORDER BY date_submitted ASC



To put this query in a stored procedure using Query Analyzer, we simply have to give it a name (GetThreadMessages) and tell it what inputs (@iThreadID int) it requires. The name of the procedure goes in the CREATE statement, then come the comma-separated inputs, and finally the AS keyword followed by the procedure body. The resulting statement looks like this:


CREATE PROCEDURE GetThreadMessages
@iThreadID int
AS
SELECT message_id,
thread_id,
user_id,
first_names,
last_name,
email,
subject,
body,
date_submitted,
category_name,
category_id,
last_edited
FROM message_view
WHERE thread_id = @iThreadID
ORDER BY date_submitted ASC


Friday, September 18, 2009

mmap() System call

The mmap() system call can be made multiple times on the same sg_fd. The munmap() system call is not required if close() is called on sg_fd. Mmap-ed IO is well-behaved when a process is fork()-ed (or the equivalent finer-grained clone() system call is made). In the case of a fork(), two processes will share the same memory-mapped area together with the sg driver for an sg_fd, and the last one to close the sg_fd (or exit) will cause the shared memory to be freed.
It is assumed that if the default reserved buffer size of 32 KB is not sufficient, then an ioctl(SG_SET_RESERVED_SIZE) call is made prior to any calls to mmap(). If the required size is not a multiple of the kernel's page size (returned by the getpagesize() system call), then the size passed to ioctl(SG_SET_RESERVED_SIZE) should be rounded up to the next page-size multiple.
Mmap-ed IO is requested by setting (or OR-ing in) the SG_FLAG_MMAP_IO constant in the flag member of the sg_io_hdr structure prior to a call to write() or ioctl(SG_IO). The logic to do mmap-ed IO _assumes_ that an appropriate mmap() call has been made by the application.



Example of opening a file in read mode


#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#define FILEPATH "/home/tarun/c_prog/mmap_test.txt"
#define NUMINTS (10)
#define FILESIZE (NUMINTS * sizeof(int))

int main(int argc, char *argv[])
{
int i;
int fd;
int *map; /* mmapped array of int's */

fd = open(FILEPATH, O_RDONLY);
if (fd == -1) {
perror("Error opening file for reading");
exit(EXIT_FAILURE);
}

map = mmap(0, FILESIZE, PROT_READ, MAP_SHARED, fd, 0);
if (map == MAP_FAILED) {
close(fd);
perror("Error mmapping the file");
exit(EXIT_FAILURE);
}

/* Read the file int-by-int from the mmap.
* Valid indices are 0..NUMINTS-1; reading map[NUMINTS]
* would run off the end of the mapping. */
for (i = 0; i < NUMINTS; ++i) {
printf("%d: %d\n", i + 1, map[i]);
}

if (munmap(map, FILESIZE) == -1) {
perror("Error un-mmapping the file");
}
close(fd);
return 0;
}

Tuesday, September 15, 2009

Log Information in Android Application Development

In the Java file:

import android.util.Log;

Log.v(TAG, "Message");

Log.v(TAG, "Message" + msg);

To view the log output:

In Eclipse: Window -> Open Perspective -> DDMS

The log information can be seen there.

Friday, September 11, 2009

An EDF scheduling class for the Linux kernel

Linux is a General Purpose Operating System (GPOS) originally designed for server and desktop environments. Since then, Linux has evolved and grown to be used in almost all computing areas.

An important part of Linux is the process scheduler (or simply the scheduler). This component of the kernel selects which process to execute at any instant of time, and is responsible for dividing the finite resource of processor time among all runnable processes on the system.

In recent years, there has been considerable interest in using Linux for real-time control systems as well, from both academic institutions and companies. The main reason for this rising interest is the free availability of the source code under the terms of the GPL license. This makes using Linux in industrial devices more convenient than using commercial operating systems (which are typically very expensive or subject to royalties). The availability of the source code, moreover, allows software developers to further improve the already excellent performance of Linux by customizing it to their specific needs.

Unfortunately, Linux was not designed to be a Real-Time Operating System (RTOS), so little attention has been given to real-time issues. The default scheduling policies of Linux (i.e., SCHED_NORMAL and SCHED_BATCH) perform very well in their own domains of application, but cannot provide any guarantee to time-sensitive tasks. For example, there is no way of specifying that a task must execute for 20 msec every 100 msec. In addition, the time between two consecutive executions of a task is not deterministic and depends heavily on the number of tasks running in the system at that time.

Linux also provides some POSIX-compliant fixed-priority scheduling policies (i.e., SCHED_RR and SCHED_FIFO). These policies, however, are not very sophisticated and often do not suit the specific application requirements. No concept of time is associated with these policies, so it is not possible to set any deadline for the tasks. Moreover, it is known from the real-time literature that, on uniprocessor systems, fixed-priority policies cannot guarantee real-time constraints when the processor is fully loaded: schedulability analysis for fixed-priority algorithms, in fact, can guarantee real-time constraints only for processor utilizations below a certain threshold, which depends on the characteristics of the task set and can be as low as 69% on uniprocessors.

These issues are particularly critical when designing time-sensitive or control applications (e.g., MPEG players) for embedded devices like smart-phones. These devices, in fact, are characterized by constraints on size, power consumption and cost. Embedded system designers therefore need to obtain the maximum performance from the embedded processor, in order to meet the timing constraints of the application even on modest hardware. Fixed-priority policies cannot ensure real-time constraints when the processor utilization is above a certain threshold, so they do not allow designers to fully exploit the performance of embedded systems. On the other hand, without a deterministic real-time policy, it is not possible to make a feasibility study of the system under development, and developers cannot be sure that the timing requirements of tasks will be met under any circumstance. These issues are among the ones that prevent the use of Linux in real industrial contexts.

To overcome these problems, some companies operating in the industrial market started selling modified versions of the Linux kernel with some kind of support for real-time scheduling. However, these non-standard versions of Linux are often not free, which negates the main reason for using Linux in industrial contexts. Moreover, they do not have the support of the huge development community of the standard kernel.

Research institutions and independent developers, on the other hand, have proposed several real-time extensions to the Linux scheduler over the years. Unfortunately, none of these extensions eventually became part of the official Linux kernel. In the meantime, the Linux scheduler has been rewritten from scratch more than once. Very recently, the addition of further scheduling policies has been made easier through the integration of some mechanisms inside the standard scheduler itself. The latest Linux scheduler (i.e., CFS), in fact, is a modular scheduler: it allows developers to define further scheduling policies and to add new schedulers to handle them. The binding between the new policy and the new scheduler is done through a set of "hooks" (i.e., function pointers) at kernel level.

We believe that, to be truly "general", Linux should also provide enhanced real-time scheduling support, while still scheduling non-real-time tasks in the usual way. In this paper, therefore, we propose an implementation of the Earliest Deadline First (EDF) algorithm, a dynamic-priority algorithm well known in the real-time literature. One interesting feature of this algorithm is that, on single-processor systems, it can guarantee real-time constraints even when the processor is fully utilized (i.e., utilization equal to 100%). The new scheduling policy introduced by our implementation is called SCHED_EDF.

With respect to similar work proposed in the past, our implementation takes advantage of the modularity offered by the new Linux scheduler, leaving the current behavior of existing policies unchanged. The new scheduling class is integrated with the latest Linux scheduler, and relies on standard Linux mechanisms (e.g., control groups), so it natively supports multicore platforms and provides hierarchical scheduling through a standard API. Our implementation does not make any restrictive assumptions on the characteristics of the tasks; thus, unlike other schedulers, it can handle both periodic and aperiodic tasks.

A very interesting feature of this scheduling class is the "temporal isolation" among the tasks handled. The temporal behavior of each task (i.e., its ability to meet its deadlines) is not affected by the behavior of the other tasks: if a task misbehaves and requires a large execution time, it cannot monopolize the processor. Thanks to the temporal isolation property, each task executes as if it were on a slower dedicated processor, so that it is possible to provide real-time guarantees on a per-task basis. This property is particularly useful when mixing hard and soft real-time tasks on the same system.

The paper is organized as follows. At the beginning we introduce definitions and characteristics concerning real-time systems, as well as the scheduling model that will be used throughout the rest of the paper. Then, a brief overview of the most notable approaches and implementations proposed in the last decade is provided. The current Linux scheduler (i.e., CFS) is explained, focusing on the modular facilities that it offers. We then describe the internals of the proposed implementation (i.e., SCHED_EDF), and show how it has been integrated with existing mechanisms available in Linux (e.g., control groups). A full description of the API available at user level is provided. Last but not least, our implementation is evaluated and validated through a set of tests and experiments on real hardware that will help clarify the main differences between SCHED_EDF and the existing scheduling policies available in Linux.

Wednesday, September 9, 2009

Testing the Lossless Join Algorithm

Testing algorithm
Input: a relation R, a decomposition D = {R1, R2, ..., Rm} of R, and
a set F of functional dependencies.

1. Create an initial matrix S with one row i for each relation Ri in
D, and one column j for each attribute Aj in R.
2. Set S(i, j) := bij for all matrix entries.
3. For each row i representing relation schema Ri do
{for each column j representing Aj do
{if relation Ri includes attribute Aj then
set S(i, j) := aj;};};
4. Repeat the following loop until a complete loop execution results
in no changes to S:

{for each functional dependency X -> Y in F do
for all rows in S which have the same symbols in the
columns corresponding to attributes in X do
{make the symbols in each column that corresponds to
an attribute in Y the same in all these rows, as follows:
if any of the rows has an "a" symbol for the column,
set the other rows to that same "a" symbol in the column.
If no "a" symbol exists for the attribute in any of the
rows, choose one of the "b" symbols that appears in one
of the rows for the attribute and set the other rows to
that same "b" symbol in the column;};};
5. If a row is made up entirely of "a" symbols, then the decomposition
has the lossless join property; otherwise it does not.

Scheduling

Linux

Since version 2.5 of the kernel, Linux has used a multilevel feedback queue with priority levels ranging from 0 to 139: 0-99 are reserved for real-time tasks and 100-139 are the "nice" task levels. For real-time tasks the time quantum is approximately 200 ms, and for nice tasks it is about 10 ms. The scheduler runs through the queue of all ready processes, letting the highest-priority ones go first and run through their time slice, after which they are placed in an expired queue. When the active queue is empty, the expired queue becomes the active one and vice versa. From versions 2.6 to 2.6.23, the kernel used this O(1) scheduler. In version 2.6.23 it was replaced by the Completely Fair Scheduler, which uses red-black trees instead of run queues.

sched_setscheduler, sched_getscheduler


NAME

       sched_setscheduler, sched_getscheduler - set and get scheduling policy/parameters

SYNOPSIS

       #include <sched.h>

int sched_setscheduler(pid_t pid, int policy,
const struct sched_param *param);

int sched_getscheduler(pid_t pid);

struct sched_param {
...
int sched_priority;
...
};

DESCRIPTION

       sched_setscheduler() sets both the scheduling policy and the associated
parameters for the process whose ID is specified in pid. If pid equals zero,
the scheduling policy and parameters of the calling process will be set. The
interpretation of the argument param depends on the selected policy.
Currently, Linux supports the following "normal" (i.e., non-real-time)
scheduling policies:

SCHED_OTHER the standard round-robin time-sharing policy;

SCHED_BATCH for "batch" style execution of processes; and

SCHED_IDLE for running very low priority background jobs.

The following "real-time" policies are also supported, for special time-
critical applications that need precise control over the way in which runnable
processes are selected for execution:

SCHED_FIFO a first-in, first-out policy; and

SCHED_RR a round-robin policy.

The semantics of each of these policies are detailed below.

sched_getscheduler() queries the scheduling policy currently applied to the
process identified by pid. If pid equals zero, the policy of the calling
process will be retrieved.

Scheduling Policies

       The scheduler is the kernel component that decides which runnable process will
be executed by the CPU next. Each process has an associated scheduling policy
and a static scheduling priority, sched_priority; these are the settings that
are modified by sched_setscheduler(). The scheduler makes its decisions based
on knowledge of the scheduling policy and static priority of all processes on
the system.

For processes scheduled under one of the normal scheduling policies
(SCHED_OTHER, SCHED_IDLE, SCHED_BATCH), sched_priority is not used in
scheduling decisions (it must be specified as 0).

Processes scheduled under one of the real-time policies (SCHED_FIFO, SCHED_RR)
have a sched_priority value in the range 1 (low) to 99 (high). (As the
numbers imply, real-time processes always have higher priority than normal
processes.) Note well: POSIX.1-2001 only requires an implementation to
support a minimum 32 distinct priority levels for the real-time policies, and
some systems supply just this minimum. Portable programs should use
sched_get_priority_min(2) and sched_get_priority_max(2) to find the range of
priorities supported for a particular policy.

Conceptually, the scheduler maintains a list of runnable processes for each
possible sched_priority value. In order to determine which process runs next,
the scheduler looks for the non-empty list with the highest static priority
and selects the process at the head of this list.

A process's scheduling policy determines where it will be inserted into the
list of processes with equal static priority and how it will move inside this
list.

All scheduling is preemptive: if a process with a higher static priority
becomes ready to run, the currently running process will be preempted and
returned to the wait list for its static priority level. The scheduling
policy only determines the ordering within the list of runnable processes with
equal static priority.

SCHED_FIFO: First In-First Out scheduling

       SCHED_FIFO can only be used with static priorities higher than 0, which means
that when a SCHED_FIFO process becomes runnable, it will always immediately
preempt any currently running SCHED_OTHER, SCHED_BATCH, or SCHED_IDLE process.
SCHED_FIFO is a simple scheduling algorithm without time slicing. For
processes scheduled under the SCHED_FIFO policy, the following rules apply:

* A SCHED_FIFO process that has been preempted by another process of higher
priority will stay at the head of the list for its priority and will resume
execution as soon as all processes of higher priority are blocked again.

* When a SCHED_FIFO process becomes runnable, it will be inserted at the end
of the list for its priority.

* A call to sched_setscheduler() or sched_setparam(2) will put the SCHED_FIFO
(or SCHED_RR) process identified by pid at the start of the list if it was
runnable. As a consequence, it may preempt the currently running process
if it has the same priority. (POSIX.1-2001 specifies that the process
should go to the end of the list.)

* A process calling sched_yield(2) will be put at the end of the list.

No other events will move a process scheduled under the SCHED_FIFO policy in
the wait list of runnable processes with equal static priority.

A SCHED_FIFO process runs until either it is blocked by an I/O request, it is
preempted by a higher priority process, or it calls sched_yield(2).

SCHED_RR: Round Robin scheduling

       SCHED_RR is a simple enhancement of SCHED_FIFO.  Everything described above
for SCHED_FIFO also applies to SCHED_RR, except that each process is only
allowed to run for a maximum time quantum. If a SCHED_RR process has been
running for a time period equal to or longer than the time quantum, it will be
put at the end of the list for its priority. A SCHED_RR process that has been
preempted by a higher priority process and subsequently resumes execution as a
running process will complete the unexpired portion of its round robin time
quantum. The length of the time quantum can be retrieved using
sched_rr_get_interval(2).

SCHED_OTHER: Default Linux time-sharing scheduling

       SCHED_OTHER can only be used at static priority 0.  SCHED_OTHER is the
standard Linux time-sharing scheduler that is intended for all processes that
do not require the special real-time mechanisms. The process to run is chosen
from the static priority 0 list based on a dynamic priority that is determined
only inside this list. The dynamic priority is based on the nice value (set
by nice(2) or setpriority(2)) and increased for each time quantum the process
is ready to run, but denied to run by the scheduler. This ensures fair
progress among all SCHED_OTHER processes.

SCHED_BATCH: Scheduling batch processes

       (Since Linux 2.6.16.)  SCHED_BATCH can only be used at static priority 0.
This policy is similar to SCHED_OTHER in that it schedules the process
according to its dynamic priority (based on the nice value). The difference
is that this policy will cause the scheduler to always assume that the process
is CPU-intensive. Consequently, the scheduler will apply a small scheduling
penalty with respect to wakeup behaviour, so that this process is mildly
disfavored in scheduling decisions.

This policy is useful for workloads that are non-interactive, but do not want
to lower their nice value, and for workloads that want a deterministic
scheduling policy without interactivity causing extra preemptions (between the
workload's tasks).

SCHED_IDLE: Scheduling very low priority jobs

       (Since Linux 2.6.23.)  SCHED_IDLE can only be used at static priority 0; the
process nice value has no influence for this policy.

This policy is intended for running jobs at extremely low priority (lower even
than a +19 nice value with the SCHED_OTHER or SCHED_BATCH policies).

Privileges and resource limits

       In Linux kernels before 2.6.12, only privileged (CAP_SYS_NICE) processes can
set a non-zero static priority (i.e., set a real-time scheduling policy). The
only change that an unprivileged process can make is to set the SCHED_OTHER
policy, and this can only be done if the effective user ID of the caller of
sched_setscheduler() matches the real or effective user ID of the target
process (i.e., the process specified by pid) whose policy is being changed.

Since Linux 2.6.12, the RLIMIT_RTPRIO resource limit defines a ceiling on an
unprivileged process's static priority for the SCHED_RR and SCHED_FIFO
policies. The rules for changing scheduling policy and priority are as
follows:

* If an unprivileged process has a non-zero RLIMIT_RTPRIO soft limit, then it
can change its scheduling policy and priority, subject to the restriction
that the priority cannot be set to a value higher than the maximum of its
current priority and its RLIMIT_RTPRIO soft limit.

* If the RLIMIT_RTPRIO soft limit is 0, then the only permitted changes are to
lower the priority, or to switch to a non-real-time policy.

* Subject to the same rules, another unprivileged process can also make these
changes, as long as the effective user ID of the process making the change
matches the real or effective user ID of the target process.

* Special rules apply for the SCHED_IDLE policy: an unprivileged process operating
under this policy cannot change its policy, regardless of the value of its
RLIMIT_RTPRIO resource limit.

Privileged (CAP_SYS_NICE) processes ignore the RLIMIT_RTPRIO limit; as with
older kernels, they can make arbitrary changes to scheduling policy and
priority. See getrlimit(2) for further information on RLIMIT_RTPRIO.

Response time

       A blocked high priority process waiting for I/O has a certain response
time before it is scheduled again. The device driver writer can greatly
reduce this response time by using a "slow interrupt" interrupt handler.

Miscellaneous

       Child processes inherit the scheduling policy and parameters across a fork(2).
The scheduling policy and parameters are preserved across execve(2).

Memory locking is usually needed for real-time processes to avoid paging
delays; this can be done with mlock(2) or mlockall(2).

Since a non-blocking infinite loop in a process scheduled under SCHED_FIFO or
SCHED_RR will block all processes with lower priority forever, a software
developer should always keep available on the console a shell scheduled under
a higher static priority than the tested application. This will allow an
emergency kill of tested real-time applications that do not block or terminate
as expected. See also the description of the RLIMIT_RTTIME resource limit in
getrlimit(2).

POSIX systems on which sched_setscheduler() and sched_getscheduler() are
available define _POSIX_PRIORITY_SCHEDULING in <unistd.h>.

RETURN VALUE

       On success, sched_setscheduler() returns zero.  On success,
sched_getscheduler() returns the policy for the process (a non-negative
integer). On error, -1 is returned, and errno is set appropriately.

ERRORS

       EINVAL The scheduling policy is not one of the recognized policies, or param
does not make sense for the policy.

EPERM The calling process does not have appropriate privileges.

ESRCH The process whose ID is pid could not be found.

CONFORMING TO

       POSIX.1-2001 (but see BUGS below).  The SCHED_BATCH and SCHED_IDLE policies
are Linux-specific.

NOTES

       POSIX.1 does not detail the permissions that an unprivileged process requires
in order to call sched_setscheduler(), and details vary across systems. For
example, the Solaris 7 manual page says that the real or effective user ID of
the calling process must match the real user ID or the saved set-user-ID of the
target process.

       Originally, standard Linux was intended as a general-purpose operating system
able to handle background processes, interactive applications, and less
demanding real-time applications (applications that usually need to meet
timing deadlines). Although the Linux 2.6 kernel allowed for kernel
preemption and the newly introduced O(1) scheduler ensured that the time
needed to schedule is fixed and deterministic irrespective of the number of
active tasks, true real-time computing was not possible up to kernel version
2.6.17.

Real-time features in the mainline Linux kernel

       From kernel version 2.6.18 onwards, however, Linux is gradually becoming
equipped with real-time capabilities, most of which are derived from the
former realtime-preempt patches developed by Ingo Molnar, Thomas Gleixner,
Steven Rostedt, and others. Until the patches have been completely merged
into the mainline kernel (this is expected to be around kernel version
2.6.30), they must be installed to achieve the best real-time performance.
These patches are named:

patch-kernelversion-rtpatchversion

and can be downloaded from
http://www.kernel.org/pub/linux/kernel/projects/rt/.

Without the patches and prior to their full inclusion into the mainline
kernel, the kernel configuration offers only the three preemption classes
CONFIG_PREEMPT_NONE, CONFIG_PREEMPT_VOLUNTARY, and CONFIG_PREEMPT_DESKTOP
which respectively provide no, some, and considerable reduction of the worst-
case scheduling latency.

With the patches applied or after their full inclusion into the mainline
kernel, the additional configuration item CONFIG_PREEMPT_RT becomes available.
If this is selected, Linux is transformed into a regular real-time operating
system. The FIFO and RR scheduling policies that can be selected using
sched_setscheduler() are then used to run a process with true real-time
priority and a minimum worst-case scheduling latency.

Scheduling Policies

These policies either require superuser privilege (i.e., run as root) or realtime capabilities granted to unprivileged users in the form of a PAM module. They include SCHED_FIFO and SCHED_RR.


REALTIME POLICIES


SCHED_FIFO

These processes are scheduled according to their realtime priority, which is unrelated to the nice value. The highest priority process runs indefinitely, never releasing the cpu except to an even higher priority realtime task, or voluntarily. Only proper realtime code should ever use this policy, as the potential for hardlocking a machine is high if the process runs away. Audio applications for professional performance, such as jack, use this policy.


SCHED_RR


These run similarly to SCHED_FIFO, except that if more than one process has the same realtime priority, they will each run for short periods and share the cpu.

NON REALTIME POLICIES

These policies do not require special privileges to use and include SCHED_NORMAL and SCHED_BATCH in mainline and -ck. -ck also includes two extra unprivileged policies, SCHED_ISO and SCHED_IDLEPRIO.

SCHED_NORMAL

This is how most normal applications are run. The amount of cpu each process consumes and the latency it will get is mostly determined by the 'nice' value. They run for short periods and share cpu amongst all other processes running with the same policy, across all nice values. Known as SCHED_OTHER in most of the rest of the world, including glibc headers as per POSIX.1.

SCHED_BATCH


Similar to SCHED_NORMAL in every way, except that specifying a task as batch tells the kernel that this process should never be considered an interactive task. This means that you want these tasks to get the same share of cpu time as SCHED_NORMAL tasks at the same nice level, but you do not care what latency they get.


SCHED_ISO

Unique to -ck, this is a scheduling policy designed for pseudo-realtime scheduling without requiring superuser privileges (unlike SCHED_RR and SCHED_FIFO). When scheduled SCHED_ISO, a task can receive very low latency scheduling and can take the full cpu like SCHED_RR, but unlike the realtime tasks it cannot starve the machine, as an upper limit on its cpu usage is specified in a tunable (see below). It is designed for realtime-like behaviour without the risk of hanging the machine, for programs not really coded safely enough to be run realtime, such as ordinary audio and video playback software. SCHED_ISO does not take a realtime priority, but nice levels like other normal tasks, although the nice value is largely ignored except when the task uses more than its cpu limit.

SCHED_IDLEPRIO


Also unique to -ck, this is a scheduling policy designed for tasks that should purely use idle time. This means that if anything else is running, these tasks will not progress at all. This policy is useful for prolonged computing tasks such as distributed computing clients (setiathome, foldingathome, etc.) and prevents these tasks from stealing any cpu time from other applications. It can also be useful for large software compilations that should not affect other tasks. These tasks are also flagged in -ck for less aggressive memory usage and disk read bandwidth, but these effects are not potent, and if the task uses a lot of memory and disk it will be noticeable. SCHED_IDLEPRIO takes a nice value; however, this nice value only determines the cpu distribution among idleprio tasks, allowing many idleprio tasks to run with different nice values. This might be desirable, for example, when running a distributed computing client at nice 19 and compiling software at nice 0 when both are SCHED_IDLEPRIO.


Monday, September 7, 2009

Dynamically initialize 2D array in ruby

#!/usr/bin/ruby

class Array2D
  def initialize(width, height)
    @data = Array.new(width) { Array.new(height) }
  end

  def [](x, y)
    @data[x][y]
  end

  def []=(x, y, value)
    @data[x][y] = value
  end
end

puts("please enter the size of square matrix")
$num = gets.chomp.to_i
arr = Array2D.new($num, $num)

# Initialize every cell to zero.
for j in 0..$num-1
  for k in 0..$num-1
    arr[j, k] = 0
  end
end

Friday, September 4, 2009

Why SystemC instead of C++ ?

The C++ language is based on sequential programming. Consequently, it is not suited to modeling concurrent activities. Furthermore, most system and hardware models require a notion of delays, clocks, or time, features which are not present in C++ as a software programming language. As a result, complex and detailed systems cannot easily be described in C++ alone. Additionally, the communication mechanisms used in hardware models, such as signals and ports, are very different from those used in software programming. Lastly, the data types found in C++ are too remote from the actual hardware implementation.
Ultimately, new dedicated data types and communication mechanisms have to be provided.

Wednesday, September 2, 2009

SystemC

Why systemC?

SystemC is replacing specially designed HDLs like Verilog and VHDL in many situations. This does not mean that these HDLs are now obsolete; rather, SystemC supports a new approach to designing a system.

SystemC was born out of the needs of the current electronics industry: electronic gadgets incorporate more and more functionality, yet the time to produce and market them cannot be compromised. For example, you may want your mobile handset to have Internet access, but you are not prepared to wait a year for that feature to arrive. That is easy to demand, but not so easy for the electronic design engineers who build the system, and the growing complexity of future systems makes the situation worse. Previously, C (or C++) was used to write the software part of a design, while one of the existing HDLs was used for the hardware part. Setting up a testbench common to both was very difficult, since they are entirely different languages. The introduction of SystemC solved many of these problems.

SystemC is nothing but a C++ class library specially designed for system design. It is open source, maintained by the OSCI (Open SystemC Initiative); visit http://systemc.org for more details.


The advantages of using SystemC are:

1. It inherits all the features of C++, a stable programming language accepted all over the world. Its large set of language constructs makes it easier to write programs with less effort.

2. Rich data types: along with the types supported by C++, SystemC provides special data types that are often used by hardware engineers.

3. It comes with a strong simulation kernel that enables designers to write good test benches easily and to simulate them. This is important because functional verification at the system level saves a lot of money and time.

4. It introduces the notion of time to C++, to simulate synchronous hardware designs. This is common in most HDLs.

5. While most HDLs support design at the RTL level, SystemC supports design at a higher abstraction level. This enables large systems to be modeled easily without worrying about their implementation. It also supports RTL design; this subset is usually called SystemC RTL.

6. Concurrency: to simulate the concurrent behavior of digital hardware, the simulation kernel is designed so that all 'processes' execute concurrently, irrespective of the order in which they are called.

Monday, August 31, 2009

How to create a doubly linked list using templates in C++

First create a file called list.h and put this code in it:

#ifndef LIST_H
#define LIST_H

#include <iostream>
#include <cstdlib>
using namespace std;

template<class T> class Lnode
{
public:
    T data;
    Lnode<T> *next;
    Lnode<T> *prev;
};

template<class T> class List
{
public:
    void add( T data );
    void display();
    T remove();
    Lnode<T>* head;     // dummy head node
    List()
    {
        head = new Lnode<T>();
        head->next = NULL;
        head->prev = NULL;
    }
};

// Insert a new node right after the dummy head.
template<class T> void List<T>::add( T data )
{
    Lnode<T> *p = new Lnode<T>();
    p->data = data;
    p->next = head->next;
    p->prev = head;
    if (head->next != NULL)
        head->next->prev = p;
    head->next = p;
}

// Remove and return the first node after the dummy head.
template<class T> T List<T>::remove()
{
    if (head->next == NULL)
    {
        cout << "ERROR: `remove' called with empty list.\n";
        exit(1);
    }
    Lnode<T> *node = head->next;
    T data = node->data;
    head->next = node->next;
    if (node->next != NULL)
        node->next->prev = head;
    delete node;
    return data;
}

// Walk the list and print each element without modifying the list.
template<class T> void List<T>::display()
{
    for (Lnode<T> *node = head->next; node != NULL; node = node->next)
        cout << node->data << "\n";
}

#endif

Next create linklist_template.cpp and put this code in it:

#include "list.h"
#include <iostream>
using namespace std;

int main()
{
    List<int> list1;
    List<double> list2;

    list1.add( 5 );
    list1.add( 25 );
    list1.add( 35 );
    list1.add( 45 );
    list2.add( 2.7 );

    list1.display();

    cout << list1.remove();
    cout << "\n";
    cout << list1.remove();
    cout << "\n";
    cout << list1.remove();
    cout << "\n";

    return 0;
}

Kernel 2.6 scheduling: O(1) time complexity

The highest priority level with at least ONE task in it is selected. This takes a fixed time (say t1).

The first task (the head of that level's doubly linked list) is allowed to run. This takes a fixed time (say t2).

The total time taken to select a new process is t = t1 + t2 (fixed).

The time taken to select a new process is therefore constant, irrespective of the number of tasks.

A deterministic algorithm!



Kernel 2.4 had a global runqueue: all CPUs had to wait for other CPUs to finish execution.

It was an O(n) scheduler. In 2.4, the scheduler went through the entire global runqueue to determine the next task to be run.

This was an O(n) algorithm, where 'n' is the number of processes. The time taken was proportional to the number of active processes in the system.

This led to large performance hits during heavy workloads.

Saturday, August 29, 2009

Manual boot in GRUB

From the GRUB command-line, you can boot a specific kernel with a named initrd image as follows:

grub$ kernel /bzImage-2.6.30

grub$ initrd /initrd-2.6.30.img

grub$ boot

You should then see the message: Uncompressing Linux... Ok, booting the kernel.

Ajax (Asynchronous Javascript + XML)

Brief history

Ajax is only a name given to a set of tools that already existed.
The main part is XMLHttpRequest, an object usable in JavaScript for communicating with the server, which has been implemented in Internet Explorer since version 4.0. In Internet Explorer it is an ActiveX object that was at first named XMLHTTP, before being generalized on all browsers under the name XMLHttpRequest once the Ajax technology became commonly used. The use of XMLHttpRequest in 2005 by Google, in Gmail and Google Maps, contributed to the success of the format, but it was when the name Ajax itself was coined that the technology started to become really popular.


Why use Ajax?

Mainly to build fast, dynamic websites, but also to save resources.
To improve the sharing of resources, it is better to use the power of all the client computers rather than just a single server and network. Ajax allows processing to be performed on the client computer (in JavaScript) with data taken from the server.
Formerly, web pages were processed only server-side, using web services or PHP scripts, before the whole page was sent over the network.
Ajax, by contrast, can selectively modify a part of the page displayed by the browser and update it without reloading the whole document with all its images, menus, etc.
For example, form fields and user choices may be processed and the result displayed immediately in the same page.

What is Ajax in depth?

Ajax is a set of technologies, supported by a web browser, including these elements:

  • HTML and CSS for presenting the information.
  • JavaScript (ECMAScript) for local processing, and DOM (Document Object Model) to access data inside the page or to access elements of an XML file read from the server (with the getElementsByTagName method, for example).
  • The XMLHttpRequest object, used to read or send data to the server asynchronously.
Optionally...
  • DOMParser may be used
  • PHP or another scripting language may be used on the server.
  • XML and XSLT to process the data if returned in XML form.
  • SOAP may be used to dialog with the server.

The word "asynchronous" means that the response of the server will be processed when it is available, without waiting and without freezing the display of the page.


How does it work?

Ajax uses a programming model based on display and events. These events are user actions; they call functions associated with elements of the web page.
Interactivity is achieved with forms and buttons. DOM allows elements of the page to be linked to actions, and data to be extracted from the XML files provided by the server.
To get data from the server, XMLHttpRequest provides two methods:
- open: creates a connection.
- send: sends a request to the server.
Data provided by the server is found in the attributes of the XMLHttpRequest object:
- responseXml for an XML file, or
- responseText for plain text.

Take note that a new XMLHttpRequest object has to be created for each new file to load.

We have to wait for the data to be available before processing it; the state of availability of the data is given by the readyState attribute of XMLHttpRequest.

States of readyState follow (only the last one is really useful):

0: not initialized.
1: connection established.
2: request received.
3: answer in process.
4: finished.

The XMLHttpRequest object

It allows interaction with servers, thanks to its methods and attributes.

Attributes

- readyState: the code successively changes value from 0 to 4, where 4 means "ready".
- status: 200 is OK; 404 if the page is not found.
- responseText: holds loaded data as a string of characters.
- responseXml: holds a loaded XML document; DOM methods allow data to be extracted.
- onreadystatechange: property that takes a function as its value, invoked whenever the readystatechange event is dispatched.

Methods

- open(mode, url, boolean): mode is the type of request, GET or POST; url is the location of the file, with a path; boolean is true (asynchronous) / false (synchronous). Optionally, a login and a password may be added to the arguments.
- send("string"): the data to send; use null for a GET command.







Building a request, step by step

First step: create an instance

This is just a classical instantiation of a class, but two options must be tried, for browser compatibility.

if (window.XMLHttpRequest)       // Object of the current window
{
    xhr = new XMLHttpRequest();  // Firefox, Safari, ...
}
else if (window.ActiveXObject)   // ActiveX version
{
    xhr = new ActiveXObject("Microsoft.XMLHTTP");  // Internet Explorer
}

Or exceptions may be used instead:

try
{
    xhr = new ActiveXObject("Microsoft.XMLHTTP");  // Trying Internet Explorer
}
catch(e)                                           // Failed
{
    xhr = new XMLHttpRequest();                    // Other browsers
}

Second step: wait for the response

The response and further processing are included in a function, and that function is assigned to the onreadystatechange attribute of the object previously created.

xhr.onreadystatechange = function()
{
    // instructions to process the response
    if (xhr.readyState == 4)
    {
        // Received, OK
    }
    else
    {
        // Wait...
    }
};

Third step: make the request itself

Two methods of XMLHttpRequest are used:
- open: command GET or POST, URL of the document, true for asynchronous.
- send: with POST only, the data to send to the server.

The request below reads a document on the server.

xhr.open('GET', 'http://www.xul.fr/somefile.xml', true);
xhr.send(null);













Inside the Linux boot process

The process of booting a Linux system consists of a number of stages. But whether you're booting a standard x86 desktop or a deeply embedded PowerPC target, much of the flow is surprisingly similar: kernel decompression, the initial RAM disk, and the other elements of Linux boot.

In the early days, bootstrapping a computer meant feeding a paper tape containing a boot program or manually loading a boot program using the front panel address/data/control switches. Today's computers are equipped with facilities to simplify the boot process, but that doesn't necessarily make it simple.

Let's start with a high-level view of Linux boot so you can see the entire landscape. Then we'll review what's going on at each of the individual steps. Source references along the way will help you navigate the kernel tree and dig in.

When a system is first booted, or is reset, the processor executes code at a well-known location. In a personal computer (PC), this location is in the basic input/output system (BIOS), which is stored in flash memory on the motherboard. The central processing unit (CPU) in an embedded system invokes the reset vector to start a program at a known address in flash/ROM. In either case, the result is the same. Because PCs offer so much flexibility, the BIOS must determine which devices are candidates for boot. We'll look at this in more detail later.

When a boot device is found, the first-stage boot loader is loaded into RAM and executed. This boot loader is less than 512 bytes in length (a single sector), and its job is to load the second-stage boot loader.

When the second-stage boot loader is in RAM and executing, a splash screen is commonly displayed, and Linux and
an optional initial RAM disk (temporary root file system) are loaded into memory. When the images are loaded, the
second-stage boot loader passes control to the kernel image and the kernel is decompressed and initialized. At this stage, the second-stage boot loader checks the system hardware, enumerates the attached hardware devices, mounts the root device, and then loads the necessary kernel modules. When complete, the first user-space program (init) starts, and high-level system initialization is performed.

That's Linux boot in a nutshell. Now let's dig in a little further and explore some of the details of the Linux boot process.

System startup

The system startup stage depends on the hardware that Linux is being booted on. On an embedded platform, a bootstrap environment is used when the system is powered on or reset. Examples include U-Boot, RedBoot, and MicroMonitor from Lucent. Embedded platforms are commonly shipped with a boot monitor. These programs reside in a special region of flash memory on the target hardware and provide the means to download a Linux kernel image into flash memory and subsequently execute it. In addition to being able to store and boot a Linux image, these boot monitors perform some level of system test and hardware initialization. In an embedded target, these boot monitors commonly cover both the first- and second-stage boot loaders.

In a PC, booting Linux begins in the BIOS at address 0xFFFF0. The first step of the BIOS is the power-on self test (POST). The job of the POST is to perform a check of the hardware. The second step of the BIOS is local device enumeration and initialization.

Given the different uses of BIOS functions, the BIOS is made up of two parts: the POST code and runtime services. After the POST is complete, it is flushed from memory, but the BIOS runtime services remain and are available to the target operating system.

Commonly, Linux is booted from a hard disk, where the Master Boot Record (MBR) contains the primary boot loader. The MBR is a 512-byte sector, located in the first sector on the disk (sector 1 of cylinder 0, head 0). After the MBR is loaded into RAM, the BIOS yields control to it.


Stage 1 boot loader

The primary boot loader that resides in the MBR is a 512-byte image containing both program code and a small partition table (see Figure 2). The first 446 bytes are the primary boot loader, which contains both executable code and error message text. The next sixty-four bytes are the partition table, which contains a record for each of four partitions (sixteen bytes each). The MBR ends with two bytes that are defined as the magic number (0xAA55). The magic number serves as a validation check of the MBR.

The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed.

Stage 2 boot loader

The secondary, or second-stage, boot loader could be more aptly called the kernel loader. The task at this stage is to load the Linux kernel and optional initial RAM disk.
The first- and second-stage boot loaders combined are called Linux Loader (LILO) or GRand Unified Bootloader (GRUB) in the x86 PC environment. Because LILO has some disadvantages that were corrected in GRUB, let's look into GRUB.



The great thing about GRUB is that it includes knowledge of Linux file systems. Instead of using raw sectors on the disk, as LILO does, GRUB can load a Linux kernel from an ext2 or ext3 file system. It does this by making the two-stage boot loader into a three-stage boot loader. Stage 1 (the MBR) boots a stage 1.5 boot loader that understands the particular file system containing the Linux kernel image. Examples include reiserfs_stage1_5 (to load from a Reiser journaling file system) or e2fs_stage1_5 (to load from an ext2 or ext3 file system). When the stage 1.5 boot loader is loaded and running, the stage 2 boot loader can be loaded. With stage 2 loaded, GRUB can, upon request, display a list of available kernels (defined in /etc/grub.conf, with soft links from /etc/grub/menu.lst and /etc/grub/grub.conf). You can select a kernel and even amend it with additional kernel parameters. Optionally, you can use a command-line shell for greater manual control over the boot process.

With the second-stage boot loader in memory, the file system is consulted, and the default kernel image and initrd image are loaded into memory. With the images ready, the stage 2 boot loader invokes the kernel image.

Kernel

With the kernel image in memory and control given from the stage 2 boot loader, the kernel stage begins. The kernel image isn't so much an executable kernel as a compressed kernel image. Typically this is a zImage (compressed image, less than 512KB) or a bzImage (big compressed image, greater than 512KB) that has been previously compressed with zlib. At the head of this kernel image is a routine that does a minimal amount of hardware setup, then decompresses the kernel contained within the kernel image and places it into high memory. If an initial RAM disk image is present, the routine moves it into memory and notes it for later use. The routine then calls the kernel and the kernel boot begins.



When the bzImage (for an i386 image) is invoked, you begin at ./arch/i386/boot/head.S in the start assembly routine (see Figure 3 for the major flow). This routine does some basic hardware setup and invokes the startup_32 routine in ./arch/i386/boot/compressed/head.S. This routine sets up a basic environment (stack, etc.) and clears the Block Started by Symbol (BSS). The kernel is then decompressed through a call to a C function called decompress_kernel (located in ./arch/i386/boot/compressed/misc.c).
When the kernel is decompressed into memory, it is called. This is yet another startup_32 function, but this function is in ./arch/i386/kernel/head.S.

In the new startup_32 function (also called the swapper or process 0), the page tables are initialized and memory paging is enabled. The type of CPU is detected along with any optional floating-point unit (FPU) and stored away for later use. The start_kernel function is then invoked (init/main.c), which takes you to the non-architecture specific Linux kernel. This is, in essence, the main function for the Linux kernel.

With the call to start_kernel, a long list of initialization functions is called to set up interrupts, perform further memory configuration, and load the initial RAM disk. In the end, a call is made to kernel_thread (in arch/i386/kernel/process.c) to start the init function, which is the first user-space process. Finally, the idle task is started and the scheduler can now take control (after the call to cpu_idle). With interrupts enabled, the pre-emptive scheduler periodically takes control to provide multitasking.

During the boot of the kernel, the initial RAM disk (initrd) that was loaded into memory by the stage 2 boot loader is copied into RAM and mounted. This initrd serves as a temporary root file system in RAM and allows the kernel to fully boot without having to mount any physical disks. Since the modules needed to interface with peripherals can be part of the initrd, the kernel can be very small but still support a large number of possible hardware configurations. After the kernel is booted, the root file system is pivoted (via pivot_root): the initrd root file system is unmounted and the real root file system is mounted.



The initrd function allows you to create a small Linux kernel with drivers compiled as loadable modules. These loadable modules give the kernel the means to access disks and the file systems on those disks, as well as drivers for other hardware assets. Because the root file system is a file system on a disk, the initrd function provides a means of bootstrapping to gain access to the disk and mount the real root file system. In an embedded target without a hard disk, the initrd can be the final root file system, or the final root file system can be mounted via the Network File System (NFS).


Init

After the kernel is booted and initialized, the kernel starts the first user-space application. This is the first program invoked that is compiled with the standard C library. Prior to this point in the process, no standard C applications have been executed.

In a desktop Linux system, the first application started is commonly /sbin/init. But it need not be. Rarely do embedded systems require the extensive initialization provided by init (as configured through /etc/inittab). In many cases, you can invoke a simple shell script that starts the necessary embedded applications.

Steps for Kernel Compilation – 2.6.30

The following steps are a guideline to recompiling the kernel. I have tested them today on my Debian installation and I hope they will work fine on all systems. If there are any mistakes please let me know. Remember to be root for the following steps. Before beginning, you also need a working gcc toolchain installed.

1. Step 1

Download the kernel source code from the Internet; pick up the 2.6.30 kernel version. The file is in tar format, so you will have to untar it. You can create your own directory to untar it into, but I would advise keeping the kernel source in /usr/src/. After copying the tar image of the kernel, untar it by issuing the command tar -xvf linux-kernel-2.6.30.tar

2. Step 2

Now it is time to configure the options we want to keep in our new kernel. Go into the source code directory, which would be something like /usr/src/linux-kernel-2.6.30, and key in the command make menuconfig. For menuconfig to run you should have the ncurses library installed, which can be done by installing the libncurses package from your synaptic package manager. The make menuconfig screen will then come up.

3. Step 3

Selection of modules - Now we have to select the modules. Make sure that Symmetric multi-processing support under Processor type and features is disabled; it causes a lot of problems, so deselect it. Items are selected and deselected using the space bar, and you can read the details of any item using the built-in context-sensitive help: just place your cursor on the item and press ?. After deselecting SMP you can go back by pressing escape. Next, go to Device Drivers -> Character devices and deselect Direct Rendering Manager; this also causes a lot of trouble because of hardware incompatibility.
After these two things you can come back to the main menu by pressing escape. Here select File systems and set ext3 to be built in, not a module. These changes are sufficient; the rest is up to you. After selecting and deselecting packages, press escape again and again until it asks you to save, and say yes.

4. Step 4

Running the makefile - After saving the menuconfig file, key in the command make. This compiles all the modules and all the files in two stages and creates the configuration files.

5. Step 5

Making modules - After make is done, key in the command make modules_install. This command installs all the modules for the kernel in /lib/modules/2.6.30.

6. Step 6

Now the bzImage of the kernel has been created; bzImage stands for "big zImage", a large compressed kernel image (the name does not refer to bzip2). Copy this image to the boot directory with the following command: cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.30

7. Step 7

Now it is time to create the initrd, the initial RAM filesystem that is loaded into memory at boot. On Debian this is done with the command mkinitramfs -o /boot/initrd.img-2.6.30 2.6.30

8. Step 8

Now run the command update-grub at the command prompt so that GRUB adds an entry for the new kernel.

9. Step 9

Congratulations! You can now boot into the new kernel.