Thursday, August 28, 2014

CODE CHEF: October Challenge 2013 "Maxim and Dividers"

Hello Readers,

Today I am posting the solution to a CodeChef problem. The problem was asked in the CODE CHEF: October Challenge 2013, and is known as "Maxim and Dividers".
Here goes the Problem Statement.

Problem Statement

    Maxim likes divisors of numbers. Also, Maxim is fond of the lucky numbers of the small elephant from Lviv city.

If you remember, lucky numbers are positive integers whose decimal representation contains only the lucky digits 4 and 7. For example, the numbers 47, 744, 4 are lucky, while 5, 17, 467 aren't.

Now Maxim is interested in the following information: what is the number of positive integer divisors of the number n which are overlucky?

We call a number overlucky if it is possible to remove some, but not all, of its digits and, by joining the remaining digits, obtain a lucky number. For example, the number 72344 is overlucky, because it is possible to remove the digits 2 and 3 and get the number 744, which is lucky. The number 223 isn't overlucky.

Input

    The first line of the input contains an integer T denoting the number of test cases. The description of T test cases follows. A single line of each test case contains an integer n.

Output

    For each test case, print the answer on its own line.

Constraints

  • 1 ≤ T ≤ 10
  • 1 ≤ n ≤ 10^9

Example

Input:
10
1
2
3
4
5
6
7
8
9
10

Output:
0
0
0
1
0
0
1
1
0
0




First, try it yourself...

Here is my solution in C for the above Problem:


#include<stdio.h>
#include<stdlib.h>

/* A number is overlucky exactly when its decimal form contains a 4 or a 7. */
int checkoverlucky(int i)
{
    while(i>0)
    {
        if(i%10==4||i%10==7)
            return 1;
        i=i/10;
    }
    return 0;
}

int main()
{
    int T,count,i;
    int k=0;
    long int n;

    scanf("%d",&T);
    if(T<1||T>10)
    {
        printf("Error in T");
        exit(0);
    }

    while(k++<T)
    {
        count=0;
        scanf("%ld",&n);
        if(n<1||n>1000000000)
        {
            printf("Error in n");
            exit(0);
        }
        /* 4 is the smallest possible overlucky divisor */
        for(i=4;i<=n;i++)
        {
            if(n%i==0 && checkoverlucky(i))
                count++;
        }
        printf("%d\n",count);
    }

    return 0;
}


Any suggestions are most welcome.. :) 

Viva La Raza
Sid 




Wednesday, June 18, 2014

Binary Representation Of A Number using recursion

The code follows:

#include<stdio.h>

void convertIntoBinary(int n)
{
    if(n>1)
        convertIntoBinary(n/2);

    printf("%d",n%2);
}

int main()
{
    convertIntoBinary(50);
    return 0;
}


Viva La Raza
Sid

The Inspiration Of the above article has been taken from: geeksforgeeks.org

Binary Representation Of A Number

Hello my dear readers!.. Sorry for keeping you guys waiting for such a long time (I know I am being modest.. ;-) )..
Today's Post is all about Converting a number into its binary form.
The idea is derived from the fact that if we AND (&) a number with 1, the bit at the unit's place is extracted from the number. Similarly, if we shift the 1 to the left, the second bit from the right is obtained, and so on. Hence the ith bit is obtained by ANDing the number with a 1 at the ith place.

The code of the program is as follows:

#include<stdio.h>

void convertIntoBinary(unsigned int n)
{
    unsigned int i;

    /* start at the most significant bit and walk down
       (1u<<31 instead of 1<<31: shifting a signed 1 into the sign bit
       is undefined behavior) */
    for(i = 1u << 31; i > 0; i = i / 2)
    {
        (n & i) ? printf("1") : printf("0");
    }
}
int main()
{
    convertIntoBinary(50);
    return 0;
}

Any questions as well as suggestions are always welcome.
Do comment if you like the article.

Viva La Raza
Sid


The Inspiration Of the above article has been taken from: geeksforgeeks.org


Thursday, May 29, 2014

ANALYSIS AND EVOLUTION OF ACTIVE QUEUE MANAGEMENT ALGORITHMS


1. Team Member Details:


  • Name: Siddharth Nawani
  • Enrollment No: 9910103538       

2. Summary of the Project:


The project focuses on how congestion control and queue management techniques have evolved over time and been modified to minimize packet loss and stabilize queue length. Extensive research has been done on existing AQMs, and our algorithm has evolved from the midst of RED. The key idea is to modify the existing RED algorithm to achieve stabilization of the queue length at routers while decreasing packet loss in comparison to RED. The probability marking function of RED has been modified into four different functions, and the results and effects on various parameters such as queue length, throughput, and packet loss have been simulated and compared. The design rationale of RED has been studied and enhanced to achieve a better stabilized queue and throughput.


3. Project Poster


4. YouTube link of Project Demonstration: https://www.youtube.com/watch?v=MADn4tjFzKE


5. Contribution to Open Source: My article on "Understanding & Modifying the dynamics of Random Early Drop AQM"


Understanding & Modifying the dynamics of Random Early Drop(RED) AQM

The basic idea behind RED queue management is to detect incipient congestion early and to convey congestion notification to the end-hosts, allowing them to reduce their transmission rates before queues in the network overflow and packets are dropped.
To do this, RED maintains an exponentially-weighted moving average (EWMA) of the queue length, which it uses to detect congestion. When the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with an explicit congestion notification (ECN) bit. When the average queue length exceeds a maximum threshold (maxth), all packets are dropped or marked.
We chose to study and evaluate RED-family AQMs for several reasons. First, while recently proposed AQM mechanisms such as BLUE [5] and REM do not strictly use the average queue length to detect congestion, they have performance goals similar to those of RED-family AQMs. Second, RED congestion control mechanisms are relatively well understood and are commonly used as a benchmark for evaluating other AQM mechanisms. Third, with the help of a general modification, it is easy to configure RED-family AQMs to create various test scenarios that reveal interesting AQM congestion control issues. Lastly, since RED is already implemented in some commercial routers, our optimized RED can be used to tune these routers, thus realizing some of the potential benefits of ECN with few network infrastructure changes.
While RED is certainly an improvement over traditional drop-tail queues, it has several shortcomings. One of the fundamental problems with RED is that it relies on queue length as an estimator of congestion. Because of this, RED has an inherent problem in determining the severity of congestion, and as a result it requires a wide range of parameters to operate correctly under different congestion scenarios. While RED can achieve an ideal operating point, it can only do so when it has a sufficient amount of buffer space and is correctly parameterized.
RED [3] was designed with the objectives to (1) minimize packet loss and queuing delay, (2) avoid global synchronization of sources, (3) maintain high link utilization, and (4) remove biases against bursty sources.
RED represents a class of queue management mechanisms that do not keep per-flow state. That is, they put the data from all the flows into one queue and focus on overall performance. This is what gives rise to the problems caused by non-responsive flows.


Our presented modified version of RED lays great importance on stability of queue, as we can see that the mechanism of RED is heavily dependent on the stability of queue length at buffer.

Modifications:

The original RED model uses a linear packet-marking probability function. This model can be enhanced by modifying the linear marking probability into a quadratic function. The quadratic model offers both sides of RED: the curve above RED (the concave one) is more aggressive and hence more stable, while the curve below RED (the convex one) offers less stabilization and less packet loss.
The lower, convex quadratic model offers service comparable to RED across different scenarios, performing better in some and worse in others.
The concave quadratic model, however, appears to be a winner over RED, being more stabilized and having reduced packet loss.



More on this topic in further posts.

Viva La Raza
Siddharth

Friday, January 3, 2014

Largest Contiguous Sum of Sub Array

Today's Post is about finding the Largest Contiguous Sum of A sub Array.

The approach I will use is called KADANE'S ALGORITHM.

The very simple yet intuitive and efficient Kadane's algorithm works as follows:

Keep adding the elements of the array while tracking two things: first, whether the running sum is still positive; if it turns negative, reset it to zero. Second, whether the sum calculated so far is the largest seen yet.

Here's the code :

#include<stdio.h>

void findsum(int arr[],int n)
{
    int i=0,max_so_far=0,max_ending_here=0;

    for(i=0;i<n;i++)
    {
     max_ending_here=max_ending_here+arr[i];
     if(max_ending_here<0)
        max_ending_here=0;
     if(max_ending_here>max_so_far)
     max_so_far=max_ending_here;

    }
    printf("%d",max_so_far);
}

int main()
{
    int arr[]= {-2, -3, 4, -1, -2, 1, 5, -3};
    int n=sizeof(arr)/sizeof(arr[0]);
    findsum(arr,n);
    return 0;
}


Viva La Raza
Sid

Finding Minimum Element In sorted Rotated Array

In today's post we will discuss a program whose objective is to find the minimum element in an array, but what's the fun in doing just that... so the twist is that the array is rotated.

Approach:

The approach is similar to binary search, with some extra conditions to check whether the array is left rotated, right rotated, or not rotated at all.
This can be done in O(log n) time, but to handle duplicates an extra condition needs to be added, which extends the worst-case time complexity of the program to O(n).

Here's the code:

#include<stdio.h>

int min(int x,int y)
{
 return (x>y)? y : x ;
}

int findmin(int arr[],int low,int high)
{
    /* array not rotated at all */
    if(high<low)
        return arr[0];

    /* only one element left */
    if(low==high)
        return arr[low];

    int mid=low+(high-low)/2;

    /* the element right after mid is the rotation point */
    if(mid<high && arr[mid+1]<arr[mid])
        return arr[mid+1];

    /* duplicates at both ends: search both halves (this is the O(n) case) */
    if(arr[mid]==arr[low]&&arr[mid]==arr[high])
        return min(findmin(arr,low,mid-1),findmin(arr,mid+1,high));

    /* mid itself is the rotation point */
    if(low<mid && arr[mid-1]>arr[mid])
        return arr[mid];

    /* otherwise recurse into the half that holds the minimum */
    if(arr[mid]<arr[high])
        return findmin(arr,low,mid-1);
    return findmin(arr,mid+1,high);
}
int main()
{
    int arr[] =  {3, 3, 3, 4, 4, 4, 4, 5, 3, 3};
    int n=sizeof(arr)/sizeof(arr[0]);
    int k=findmin(arr,0,n-1);
    printf("%d",k);
    return 0;
}


Do comment.. :)
Viva La Raza
Sid

C Program To Move All Zeros To the End Of the Array

This is a simple program so let's code it.

#include<stdio.h>

void moveToEnd(int arr[],int n)
{
 int pos=0,i=0;

 while(i<n)
 {
     if(arr[i]!=0)
     arr[pos++]=arr[i++];
     else
        i++;
 }
     while(pos<n)
        arr[pos++]=0;
}

int main()
{
    int arr[] = {1, 9, 8, 4, 0, 0, 2, 7, 0, 6, 0, 9};
    int n,i;
    n=sizeof(arr)/sizeof(arr[0]);
    moveToEnd(arr,n);
    for(i=0;i<n;i++)
        printf(" %d",arr[i]);
    return 0;
}

Count all pairs with a given difference

Today's post is sort of an extension of previous post.

Here We will discuss approaches to count all distinct pairs with a given difference k.

Example to understand the question better:

Input: arr[] = {1, 5, 3, 4, 2}, k = 3
Output: 2
There are 2 pairs with difference 3, the pairs are {1, 4} and {5, 2}


The algorithm is similar to the previous one, so I will move straight on to the code. :)

Few points relating to aforementioned code:

1. The sorting can be done in O(n log n) time.

2. Instead of binary search, hashing can also be used to look up the elements, but the overall time complexity will remain O(n log n) because of the sorting step.

3. AVL trees can also be used to solve this problem. Instead of sorting the elements, an AVL tree can be built and maintained, and for each element arr[i]+k can be searched in O(log n) time. But the overall time complexity will still remain O(n log n).

#include<stdio.h>
#include<stdlib.h>

int compare(const void* a,const void* b)
{
return (*(int*)a - *(int*)b);
}

int binarySearch(int arr[],int low,int high,int k)
{
    int mid= low+((high-low)/2);
    if(high>=low)
    {

    if(arr[mid]==k)
        return mid;
    if(arr[mid]>k)
        return binarySearch(arr,low,mid-1,k);
    if(arr[mid]<k)
        return binarySearch(arr,mid+1,high,k);
    }
    return -1;
}
void countPairs(int arr[], int n,int k)
{
    int i=0,pairs=0;
    qsort(arr,n,sizeof(int),compare); //for sorting the elements of the array
    for(i=0;i<n;i++)
       {
           if(binarySearch(arr,i+1,n-1,arr[i]+k)!=-1)
            pairs++;
       }

printf("No. of distinct pairs with difference as %d are %d",k,pairs);


}
int main()
{
    int arr[]= {1, 5, 3, 4, 2};
    int k=3;
    int n=sizeof(arr)/sizeof(arr[0]);
    countPairs(arr,n,k);
    return 0;

}

Comments and suggestions are welcome..:)

Viva La Raza
Sid

Find a Pair with a given difference k

Happy new year Everyone.. :) Lets start coding..

Today's post is all about finding a pair of numbers in a given unsorted array, which has a difference k.

The simplest approach is to run two loops: the outer loop picks each element one by one, and the inner loop compares it with every other element to check whether the difference between them is k.
Complexity: O(n^2)

Can we improve it... Yes.

Algorithm#1:

1. The array can be first sorted.  // complexity n logn
2. Binary search can be implemented for each element to find arr[i]+k in [i+1...n-1]. //complexity n log n

Complexity: O(n logn)

can we improve further ..Yes..

The second step can be improved to O(n). Lets see the approach.

Maintain two indices, i=0 and j=1. Calculate the difference arr[j]-arr[i]: if the difference is less than k, increment j, otherwise increment i, until we find the exact difference or run out of numbers.... :)

Simple yet intuitive approach..
Lets code it..

#include<stdio.h>

void findPair(int arr[],int n,int k)
{
    int i=0,j=1;

    while(i<n && j<n)
    {
        if(i!=j && (arr[i]-arr[j]==k || arr[j]-arr[i]==k))
        {
            printf("Pair found %d %d",arr[i],arr[j]);
            //i++;
            return;
        }

        if(arr[j]-arr[i] > k)
            i++;
        else if(arr[j]-arr[i] < k)
            j++;

    }
    printf("No pair found");
}
int main()
{
    int arr[] = {1, 8, 30, 40, 100,160};
    int k=60,n;
    n=sizeof(arr)/sizeof(arr[0]);
    findPair(arr,n,k);
    return 0;
}

Comments and suggestions are welcome.. :)

Viva La Raza
Sid