Wednesday, June 18, 2014

Binary Representation Of A Number using recursion

The idea: recursively halve n until only the most significant bit is left, then print the remainders (n % 2) while the recursion unwinds, so the bits come out most-significant first. The code follows:

#include<stdio.h>

void convertIntoBinary(int n)
{
    /* Recurse on the higher-order bits first */
    if (n > 1)
        convertIntoBinary(n / 2);

    /* Print the current bit while unwinding */
    printf("%d", n % 2);
}

int main()
{
    convertIntoBinary(50);
    return 0;
}
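
For 50 this prints 110010.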


Viva La Raza
Sid

The inspiration for the above article was taken from: geeksforgeeks.org

Binary Representation Of A Number

Hello my dear readers!.. Sorry for keeping you guys waiting for such a long time (I know I am being modest.. ;-) )..
Today's post is all about converting a number into its binary form.
The idea is derived from the fact that if we AND (&) a number with 1, we extract the bit at the unit's place. Similarly, if we shift the 1 one position to the left, we extract the second bit from the right, and so on. Hence the ith bit is obtained by ANDing the number with a 1 shifted into the ith place.

The code of the program is as follows:

#include<stdio.h>

void convertIntoBinary(unsigned int n)
{
    unsigned int i;

    /* Start the mask at the most significant bit (1u << 31 avoids the
       signed overflow that a plain 1 << 31 would cause) and slide it right */
    for (i = 1u << 31; i > 0; i >>= 1)
    {
        (n & i) ? printf("1") : printf("0");
    }
}
int main()
{
    convertIntoBinary(50);
    return 0;
}
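
Unlike the recursive version, this one always prints all 32 bits, so 50 comes out as 00000000000000000000000000110010, leading zeros included.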

Any questions as well as suggestions are always welcome.
Do comment if you like the article.

Viva La Raza
Sid


The inspiration for the above article was taken from: geeksforgeeks.org


Thursday, May 29, 2014

ANALYSIS AND EVOLUTION OF ACTIVE QUEUE MANAGEMENT ALGORITHMS


1. Team Member Details:


  • Name: Siddharth Nawani
  • Enrollment No: 9910103538       

2. Summary of the Project:


The project focuses on how congestion control and queue management techniques have evolved over time and been modified to minimize packet loss and stabilize queue length. Extensive research was done on existing AQMs, and our algorithm evolved out of RED. The key idea is to modify the existing RED algorithm to achieve a stabilized queue length at routers while decreasing packet loss in comparison to RED. The probability marking function of RED was modified into four different functions, and the results and effects on parameters such as queue length, throughput and packet loss were simulated and compared. The design rationale of RED was studied and enhanced to achieve a better stabilized queue and higher throughput.


3. Project Poster


4. YouTube link of Project Demonstration: https://www.youtube.com/watch?v=MADn4tjFzKE


5. Contribution to Open Source: My article on "Understanding & Modifying the dynamics of Random Early Drop AQM"


Understanding & Modifying the dynamics of Random Early Drop(RED) AQM

The basic idea behind RED queue management is to detect incipient congestion early and to convey congestion notification to the end-hosts, allowing them to reduce their transmission rates before queues in the network overflow and packets are dropped.
To do this, RED maintains an exponentially-weighted moving average (EWMA) of the queue length, which it uses to detect congestion. When the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with an explicit congestion notification (ECN) bit [2]. When the average queue length exceeds a maximum threshold (maxth), all packets are dropped or marked.
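
To make the mechanism concrete, here is a minimal C sketch of the computation described above. The parameter names (w_q, minth, maxth, max_p) follow the RED paper; the values in main are made up for illustration and are not the project's actual configuration.

#include<stdio.h>

/* EWMA of the instantaneous queue length q, with weight w_q */
double red_average(double avg, int q, double w_q)
{
    return (1.0 - w_q) * avg + w_q * q;
}

/* RED's marking probability: 0 below minth, 1 at or above maxth,
   and a linear ramp up to max_p in between */
double red_mark_probability(double avg, double minth, double maxth, double max_p)
{
    if (avg < minth)
        return 0.0;
    if (avg >= maxth)
        return 1.0;
    return max_p * (avg - minth) / (maxth - minth);
}

int main()
{
    double avg = 0.0;
    int q[] = {2, 8, 15, 25, 40, 60};   /* sample instantaneous queue lengths */
    int i;

    for (i = 0; i < 6; i++)
    {
        avg = red_average(avg, q[i], 0.2);
        printf("q=%2d  avg=%5.2f  p=%.3f\n", q[i], avg,
               red_mark_probability(avg, 5.0, 30.0, 0.1));
    }
    return 0;
}

Because the decision is driven by the smoothed average rather than the instantaneous queue, short bursts pass through while sustained buildup triggers early marking.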
We chose to study and evaluate RED-family AQMs for several reasons. First, while recently proposed AQM mechanisms such as BLUE [5] and REM do not strictly use the average queue length to compute congestion, they have performance goals similar to those of RED-family AQMs. Second, RED congestion control mechanisms are relatively well understood and are commonly used as a benchmark for evaluating other AQM mechanisms. Third, with the help of a general modification, it is easy to configure RED-family AQMs to create various test scenarios that reveal interesting AQM congestion control issues. Lastly, since RED is already implemented in some commercial routers, our optimized RED can be used to tune these routers, thus realizing some of the potential benefits of ECN with few network infrastructure changes.
While RED is certainly an improvement over traditional drop-tail queues, it has several shortcomings. One of the fundamental problems is that RED relies on queue length as an estimator of congestion, which gives it an inherent difficulty in determining the severity of congestion. As a result, RED requires a wide range of parameters to operate correctly under different congestion scenarios. While RED can achieve an ideal operating point, it can only do so when it has a sufficient amount of buffer space and is correctly parameterized.
RED [3] was designed with the objectives to (1) minimize packet loss and queuing delay, (2) avoid global synchronization of sources, (3) maintain high link utilization, and (4) remove biases against bursty sources.
RED represents a class of queue management mechanisms that do not keep per-flow state. That is, they put the data from all flows into one queue and focus on overall performance. This is the origin of the problems caused by non-responsive flows.


Our modified version of RED places great importance on the stability of the queue, since, as we have seen, the mechanism of RED depends heavily on a stable queue length at the buffer.

Modifications:

The original RED marking function is linear. This model can be enhanced by changing the linear packet-marking probability into a quadratic function. The quadratic model offers curves on both sides of RED's line: the one above RED (the concave one) is more aggressive and hence more stabilizing, while the one below RED (the convex one) offers less stabilization and lower packet loss.
The convex quadratic model below RED performs roughly on par with RED across different scenarios, doing better in some and worse in others.
The concave quadratic model, however, appears to be a clear winner over RED, being more stable and showing reduced packet loss.
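
As a rough illustration, the linear ramp can be bent into concave and convex quadratics over the normalized average queue position x = (avg - minth)/(maxth - minth). The exact four functions used in the project are not reproduced here; the two quadratic forms below are illustrative assumptions.

#include<stdio.h>

/* Illustrative marking curves over x in [0,1]; assumed forms,
   not the project's exact functions */
double p_linear(double x, double max_p)  { return max_p * x; }             /* original RED */
double p_concave(double x, double max_p) { return max_p * (2*x - x*x); }   /* above RED: more aggressive */
double p_convex(double x, double max_p)  { return max_p * x * x; }         /* below RED: less aggressive */

int main()
{
    double x;
    for (x = 0.0; x <= 1.0; x += 0.25)
        printf("x=%.2f  linear=%.4f  concave=%.4f  convex=%.4f\n",
               x, p_linear(x, 0.1), p_concave(x, 0.1), p_convex(x, 0.1));
    return 0;
}

The concave curve sits above the linear one everywhere on (0,1), so it starts marking earlier and harder; the convex curve sits below it and marks later and more gently.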



More on this topic in further posts.

Viva La Raza
Siddharth

Friday, January 3, 2014

Largest Contiguous Sum of Sub Array

Today's post is about finding the largest contiguous sum of a subarray.

The approach I will use is called KADANE'S ALGORITHM

The very simple yet intuitive and efficient Kadane's algorithm works as follows:

Keep adding the elements of the array one by one and keep track of two things: first, whether the running sum is still positive; if it ever goes negative, reset it to zero. Second, whether the running sum is the largest seen so far.

Here's the code :

#include<stdio.h>

void findsum(int arr[], int n)
{
    int i, max_so_far = 0, max_ending_here = 0;

    for (i = 0; i < n; i++)
    {
        /* Extend the running subarray with arr[i] */
        max_ending_here = max_ending_here + arr[i];

        /* A negative running sum can never help a later subarray, so reset it */
        if (max_ending_here < 0)
            max_ending_here = 0;

        /* Remember the best sum seen so far */
        if (max_ending_here > max_so_far)
            max_so_far = max_ending_here;
    }
    printf("%d", max_so_far);
}

int main()
{
    int arr[]= {-2, -3, 4, -1, -2, 1, 5, -3};
    int n=sizeof(arr)/sizeof(arr[0]);
    findsum(arr,n);
    return 0;
}
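
For the sample array the program prints 7, the sum of the subarray {4, -1, -2, 1, 5}. Note that with max_so_far initialized to 0, the function reports 0 for an array of all negative numbers.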


Viva La Raza
Sid

Finding Minimum Element In sorted Rotated Array

In today's post we will discuss a program whose objective is to find the minimum element in an array... but what's the fun in just doing that? So the twist is that the array is sorted and then rotated.

Approach:

The approach is similar to binary search, with some extra conditions to check whether the array is left rotated, right rotated, or not rotated at all.
This can be done in O(log n) time, but to handle duplicates an extra condition needs to be added, which extends the worst-case time complexity to O(n).

Here's the code:

#include<stdio.h>

int min(int x,int y)
{
 return (x>y)? y : x ;
}

int findmin(int arr[], int low, int high)
{
    /* Search ran past the end: the array was never rotated */
    if (high < low)
        return arr[0];

    /* Only one element left */
    if (low == high)
        return arr[low];

    int mid = low + (high - low) / 2;

    /* The element right after mid is the point of rotation */
    if (mid < high && arr[mid+1] < arr[mid])
        return arr[mid+1];

    /* Duplicates at both ends: we cannot tell which half holds the
       minimum, so search both (this is the O(n) worst case) */
    if (arr[mid] == arr[low] && arr[mid] == arr[high])
        return min(findmin(arr, low, mid-1), findmin(arr, mid+1, high));

    /* mid itself is the point of rotation */
    if (low < mid && arr[mid-1] > arr[mid])
        return arr[mid];

    /* Otherwise recurse into the half that must contain the rotation */
    if (arr[mid] < arr[high])
        return findmin(arr, low, mid-1);
    return findmin(arr, mid+1, high);
}
int main()
{
    int arr[] =  {3, 3, 3, 4, 4, 4, 4, 5, 3, 3};
    int n=sizeof(arr)/sizeof(arr[0]);
    int k=findmin(arr,0,n-1);
    printf("%d",k);
    return 0;
}
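
For the sample array {3, 3, 3, 4, 4, 4, 4, 5, 3, 3} the program prints 3.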


Do comment.. :)
Viva La Raza
Sid

C Program To Move All Zeros To the End Of the Array

This is a simple program. The idea: walk through the array, copying every non-zero element to the next free slot at the front (tracked by pos), then fill whatever remains with zeros. Let's code it.

#include<stdio.h>

void moveToEnd(int arr[], int n)
{
    int pos = 0, i = 0;

    /* Copy every non-zero element to the next free slot at the front */
    while (i < n)
    {
        if (arr[i] != 0)
            arr[pos++] = arr[i++];
        else
            i++;
    }

    /* Fill the remaining slots with zeros */
    while (pos < n)
        arr[pos++] = 0;
}

int main()
{
    int arr[] = {1, 9, 8, 4, 0, 0, 2, 7, 0, 6, 0, 9};
    int n,i;
    n=sizeof(arr)/sizeof(arr[0]);
    moveToEnd(arr,n);
    for(i=0;i<n;i++)
        printf(" %d",arr[i]);
    return 0;
}
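
For the sample array the program prints 1 9 8 4 2 7 6 9 0 0 0 0.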