Saturday, November 9, 2019

Spring MVC Basic Configuration Using Annotations

1. Create a configuration class with the following annotations:

@Configuration
@EnableWebMvc
@ComponentScan("com.XXX")
@EnableAspectJAutoProxy(proxyTargetClass = false)
@PropertySource(value = "classpath:/application.properties")

The sample class below illustrates this:

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

import org.apache.log4j.Logger; // assuming log4j 1.x, whose getLogger takes a Class
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.core.env.Environment;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.multipart.commons.CommonsMultipartResolver;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;

@Configuration
@EnableWebMvc
@ComponentScan("com.xxx")
@EnableAspectJAutoProxy(proxyTargetClass = false)
@PropertySource(value = "classpath:/application.properties")
public class UtilityServiceConfiguration {

    private static final Logger LOG = Logger.getLogger(UtilityServiceConfiguration.class);

    @Autowired
    private Environment env;

    @Value("${application.env}")
    private String server;

    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }

    // Needed to resolve ${...} placeholders in @Value; note it must be static
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyConfigInDev() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public CommonsMultipartResolver multipartResolver() {
        CommonsMultipartResolver resolver = new CommonsMultipartResolver();
        resolver.setDefaultEncoding("utf-8");
        resolver.setMaxUploadSize(Integer.parseInt(env.getProperty("multipart.maxFileSize")));
        resolver.setMaxInMemorySize(Integer.parseInt(env.getProperty("multipart.maxinMemory")));
        return resolver;
    }

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
        pool.setCorePoolSize(5);
        pool.setMaxPoolSize(10);
        pool.setQueueCapacity(200);
        pool.setWaitForTasksToCompleteOnShutdown(true);
        // Log rejected tasks and retry by re-submitting them to the executor
        pool.setRejectedExecutionHandler(new RejectedExecutionHandler() {

            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                LOG.debug("runnable " + r);
                executor.execute(r);
            }
        });
        return pool;
    }
}
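For completeness, the application.properties file on the classpath would supply the keys read above. A minimal sketch (the values are illustrative, not from the original post):

application.env=dev
multipart.maxFileSize=10485760
multipart.maxinMemory=1048576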


2. Create another class in the same package, extending AbstractAnnotationConfigDispatcherServletInitializer, with the following code:

public class XXX extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return new Class[] { UtilityServiceConfiguration.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return null;
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }

    @Override
    protected Filter[] getServletFilters() {
        return new Filter[] { new CorsFilter() };
    }
}
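Note that Spring's own org.springframework.web.filter.CorsFilter requires a CorsConfigurationSource constructor argument, so the no-arg new CorsFilter() above presumably refers to a custom filter. A minimal sketch of such a filter (the header values are assumptions):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Add permissive CORS headers to every response (tighten these for production)
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}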

3. To register multiple dispatcher servlets, use the configuration below:

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {

        servletContext.addListener(MyAppContextLoaderListener.class);

        servletContext.setInitParameter("spring.profiles.active", "dev");
        servletContext.setInitParameter("contextClass", "org.springframework.web.context.support.AnnotationConfigWebApplicationContext");
        servletContext.setInitParameter("contextConfigLocation", "MyAppConfig");

        // dispatcher servlet for restEntryPoint
        AnnotationConfigWebApplicationContext restContext = new AnnotationConfigWebApplicationContext();
        restContext.register(MyRestConfig.class);
        ServletRegistration.Dynamic restEntryPoint = servletContext.addServlet("restEntryPoint", new DispatcherServlet(restContext));
        restEntryPoint.setLoadOnStartup(1);
        restEntryPoint.addMapping("/api/*");

        // dispatcher servlet for webSocketEntryPoint
        AnnotationConfigWebApplicationContext webSocketContext = new AnnotationConfigWebApplicationContext();
        webSocketContext.register(MyWebSocketWebConfig.class);
        ServletRegistration.Dynamic webSocketEntryPoint = servletContext.addServlet("webSocketEntryPoint", new DispatcherServlet(webSocketContext));
        webSocketEntryPoint.setLoadOnStartup(1);
        webSocketEntryPoint.addMapping("/ws/*");

        // dispatcher servlet for webEntryPoint
        AnnotationConfigWebApplicationContext webContext = new AnnotationConfigWebApplicationContext();
        webContext.register(MyWebConfig.class);
        ServletRegistration.Dynamic webEntryPoint = servletContext.addServlet("webEntryPoint", new DispatcherServlet(webContext));
        webEntryPoint.setLoadOnStartup(1);
        webEntryPoint.addMapping("/");

        FilterRegistration.Dynamic validationFilter = servletContext.addFilter("validationFilter", new MyValidationFilter());
        validationFilter.addMappingForUrlPatterns(null, false, "/*");

        FilterRegistration.Dynamic lastFilter = servletContext.addFilter("lastFilter", new MyLastFilter());
        lastFilter.addMappingForUrlPatterns(null, false, "/*");

    }

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // return new Class<?>[] { AppConfig.class };
        // Not used here: the root context is configured via init parameters in onStartup()
        return null;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // Not used: the dispatcher servlets are registered manually in onStartup()
        return null;
    }

    @Override
    protected String[] getServletMappings() {
        // Not used: mappings are added per servlet in onStartup()
        return null;
    }

}

How ConcurrentHashMap Works Internally in Java

ConcurrentHashMap, as the name suggests, allows concurrent reads and writes to the map, but with limitations. Internally it maintains another data structure called segments: each bucket of the map belongs to one segment. The number of segments is called the concurrency level, and it determines how many threads can write simultaneously. A segment is locked while data in it is written, updated, or removed; think of segments as locks that prevent concurrent writes to the same buckets from causing inconsistency. As long as writes land on different segments, they can proceed in parallel. Reads generally require no lock (with one exception discussed below) and return the most recently updated value.

Concurrency level, the Segment array, and initialization:
When you create a ConcurrentHashMap you can provide a concurrency level, which determines the size of the Segment array. The size of the Segment array will always be a power of two that is equal to or greater than the concurrency level. If you don't provide one, the default is used, and the level is capped at a maximum:
static final int DEFAULT_CONCURRENCY_LEVEL = 16;
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
So if you give a concurrency level of 10, the next power of 2, i.e. 16, is picked, and a Segment array of size 16 is created, which means 16 threads can write to the map simultaneously (provided they hit different segments).

2 ^ N >= concurrency level
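As a quick sketch, the concurrency level is supplied through the three-argument constructor (the capacity and load-factor values here are arbitrary):

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        // initialCapacity = 64, loadFactor = 0.75f, concurrencyLevel = 10;
        // pre-Java-8 this creates a Segment array of the next power of two >= 10, i.e. 16
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>(64, 0.75f, 10);
        map.put("Aniket", 1);
        System.out.println(map.get("Aniket")); // prints 1
    }
}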

static final class Segment<K,V> extends ReentrantLock implements Serializable {

    //The number of elements in this segment's region.
    transient volatile int count;
    //The per-segment table.
    transient volatile HashEntry<K,V>[] table;
}

Putting an element in ConcurrentHashMap:

To put an element in the map we first need to determine which segment it belongs to. For this we take the hashCode of the key and then rehash it to defend against poor-quality hash functions:

     /**
     * Applies a supplemental hash function to a given hashCode, which
     * defends against poor quality hash functions.  This is critical
     * because ConcurrentHashMap uses power-of-two length hash tables,
     * that otherwise encounter collisions for hashCodes that do not
     * differ in lower or upper bits.
     */
    private static int hash(int h) {
        // Spread bits to regularize both segment and index locations,
        // using variant of single-word Wang/Jenkins hash.
        h += (h <<  15) ^ 0xffffcd7d;
        h ^= (h >>> 10);
        h += (h <<   3);
        h ^= (h >>>  6);
        h += (h <<   2) + (h << 14);
        return h ^ (h >>> 16);
    }
Once the hash is calculated, we find the segment it belongs to and delegate to that segment's put method as follows -

    public V put(K key, V value) {
        if (value == null)
            throw new NullPointerException();
        int hash = hash(key.hashCode());
        return segmentFor(hash).put(key, hash, value, false);
    }

    final Segment<K,V> segmentFor(int hash) {
        return segments[(hash >>> segmentShift) & segmentMask];
    }


We will see how the segment index is computed in a proper example shortly. Once put is delegated to a segment, the segment adds the entry to the appropriate bucket within itself.

        V put(K key, int hash, V value, boolean onlyIfAbsent) {
            lock();
            try {
                int c = count;
                if (c++ > threshold) // ensure capacity
                    rehash();
                HashEntry<K,V>[] tab = table;
                int index = hash & (tab.length - 1);
                HashEntry<K,V> first = tab[index];
                HashEntry<K,V> e = first;
                while (e != null && (e.hash != hash || !key.equals(e.key)))
                    e = e.next;

                V oldValue;
                if (e != null) {
                    oldValue = e.value;
                    if (!onlyIfAbsent)
                        e.value = value;
                }
                else {
                    oldValue = null;
                    ++modCount;
                    tab[index] = new HashEntry<K,V>(key, hash, first, value);
                    count = c; // write-volatile
                }
                return oldValue;
            } finally {
                unlock();
            }
        }


Now this is a very interesting method. Let's understand what's happening here.

The first call is to lock(). Since this is a write/update operation on a bucket of the segment, we need a lock. If you recollect, the Segment class extends ReentrantLock, so each segment is itself a lock, and you can call lock() and unlock() directly on it. Note that lock() is a blocking call.
Next it works like a normal HashMap: you find the index of the HashEntry table where the element's hash falls and add it there as a linked-list node.
You can see code similar to HashMap's: it updates the value if the key already exists, inserts into the table slot if that slot is empty, and prepends to the slot's linked list otherwise.
You can also see a call to rehash() if the threshold is reached. Like HashMap's Entry array, each segment's HashEntry table has a threshold; when it is exceeded, that table is resized and its entries are rehashed for performance (the Segment array itself is never resized). That is what rehash() does.
Finally, once the operation is complete, unlock() is called in the finally block so that other threads can continue updating.
NOTE: For the index into the Segment array the first N bits of the rehashed hash are used, whereas for the index into a segment's Entry table the last N bits are used (see details in the example below).

Getting an element from ConcurrentHashMap:

A get on ConcurrentHashMap is very simple: usually no locks are involved. You simply read the data and return it -

        public V get(Object key) {
                int hash = hash(key.hashCode());
                return segmentFor(hash).get(key, hash);
        }

        V get(Object key, int hash) {
            if (count != 0) { // read-volatile
                HashEntry<K,V> e = getFirst(hash);
                while (e != null) {
                    if (e.hash == hash && key.equals(e.key)) {
                        V v = e.value;
                        if (v != null)
                            return v;
                        return readValueUnderLock(e); // recheck
                    }
                    e = e.next;
                }
            }
            return null;
        }


NOTE: the readValueUnderLock method is used as a backup in case a null (not-yet-initialized) value is ever seen in an unsynchronized access method.

Example

All of the above was code and theory; now let's work through an actual example.

Let's say we have created a ConcurrentHashMap with a concurrency level of 10. Based on this, the Segment array will be created by the following code -

    private static void printSegmentDetails(int concurrencyLevel) {
        int sshift = 0;
        int segmentMask = 0;
        int segmentShift = 0;

        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        segmentShift = 32 - sshift;
        segmentMask = ssize - 1;
        System.out.println("Segment array size :" + ssize);
        System.out.println("segmentShift : " + segmentShift);
        System.out.println("segmentMask : " + segmentMask);
    }


Output for concurrency level 10:
Segment array size : 16
segmentShift : 28
segmentMask : 15



NOTE: As mentioned before, the Segment array is of size 2^N such that 2^N >= concurrency level. In this case 2^4 = 16.

Now that we have the segment table in place, let's simulate put. We will put a String key "Aniket"; we don't care about the value, as long as it's not null.

1. First we calculate the hashCode of the key.
2. Then we rehash it to get a better hash (as mentioned above).
3. Then, based on the resulting hash, we find which segment it belongs to.

Recall that the Segment table size is 2^N >= concurrency level, and we want the first N bits of the hash to determine which segment it falls into (N bits yield index values 0 to 2^N - 1, exactly covering the segment array). Also recall the code that computes this index:

int segmentIndex = (hash >>> segmentShift) & segmentMask;

This logically right-shifts the hash by segmentShift bits. Since an int is 32 bits and segmentShift = 32 - sshift, hash >>> segmentShift gives you the first sshift bits (sshift is the N in 2^N above). segmentMask then masks off those N bits after the shift.

So in this case,
N = sshift = 4
2^N = 16 -> size of the Segment array
segmentShift = 32 - 4 = 28 (as we saw in the output above)
segmentMask = 16 - 1 = 15


    public static void main(String args[]) {
        String key = "Aniket";
        // hashCode of the key
        System.out.println(key.hashCode());
        // better hash
        System.out.println(hash(key.hashCode()));
        // better hash in binary
        System.out.println(Integer.toBinaryString(hash(key.hashCode())));
        // logical right shift by segmentShift
        System.out.println("Right shifted hash : " + Integer.toBinaryString(hash(key.hashCode()) >>> 28));
        // segment index as binary AND of the right-shifted hash and segmentMask
        System.out.println("Segment Index : " + Integer.toBinaryString((hash(key.hashCode()) >>> 28) & 15));
        // segment index as decimal
        System.out.println("Segment Index : " + ((hash(key.hashCode()) >>> 28) & 15));
    }


Output :
1965716254
1839402854
1101101101000110000111101100110
Right shifted hash : 110
Segment Index : 110
Segment Index : 6

NOTE: 1101101101000110000111101100110 is only 31 bits because the most significant bit is 0 and Integer.toBinaryString drops leading zeros. The same applies to all subsequent binary output.

So your element with key "Aniket" will go into index 6 of the Segment array. Inside the segment, it's pretty simple to calculate the index into the Entry array:

 int entryArrayindex = hash & (tab.length - 1);

         int entryArrayindex = (hash(key.hashCode()) & (16 - 1));
         System.out.println("Entry array index : " + entryArrayindex);
         System.out.println("Entry array index in binary : " + Integer.toBinaryString(entryArrayindex));


Output :
Entry array index : 6
Entry array index in binary : 110

So finally the Entry is inserted at index 6 of the segment's Entry table.

To summarize: the first N bits of the rehashed hash select the index into the Segment array, whereas the last N bits select the index into the Entry table.



Ref : http://opensourceforgeeks.blogspot.com/2017/05/how-concurrenthashmap-works-internally.html

PS: I would like to mention here that this article is valid prior to Java 8. In Java 8 the segment scheme was replaced with CAS operations (compareAndSwap) plus synchronized blocks on individual bins. Hopefully I will write about that in a future article; otherwise you can read the Java 8 ConcurrentHashMap source to understand it.

Thursday, November 7, 2019

Binary Representation

The following recursive C program prints the binary representation of an integer. The recursion descends to the most significant bit first (n/2), then prints the remainders (n%2) on the way back up:

#include<stdio.h>

void convertIntoBinary(int n)
{
    if (n > 1)
        convertIntoBinary(n / 2);

    printf("%d", n % 2); /* prints bits most significant first */
}

int main()
{
    convertIntoBinary(50);
    return 0;
}


Viva La Raza
Sid

Wednesday, November 6, 2019

Common Database Interview Questions

SQL query to delete duplicate rows
Delete duplicate rows using a Common Table Expression (CTE)

WITH CTE AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY col1, col2, col3 ORDER BY col1, col2, col3) AS RN
    FROM MyTable
)
DELETE FROM CTE WHERE RN <> 1;

Ref: http://www.besttechtools.com/articles/article/sql-query-to-delete-duplicate-rows

I strongly recommend following this article: http://codaffection.com/sql-server-article/delete-duplicate-rows-in-sql-server/

Nth highest salary in MySQL

Solution for finding the 2nd highest salary in SQL. Here is what the SQL looks like:

SELECT MAX(Salary) FROM Employee
WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee)

And the generic form for the Nth highest salary (substitute the desired N):

SELECT * /* This is the outer query part */
FROM Employee Emp1
WHERE (N-1) = ( /* Subquery starts here */
    SELECT COUNT(DISTINCT(Emp2.Salary))
    FROM Employee Emp2
    WHERE Emp2.Salary > Emp1.Salary)

Ref : https://www.programmerinterview.com/database-sql/find-nth-highest-salary-sql/


Display Nth Record from Employee table

SELECT * FROM (
  SELECT
    ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber,
    columns
  FROM tablename
) AS foo
WHERE rownumber = n

OR

SELECT
    *
FROM
    (
        SELECT
            ROW_NUMBER () OVER (ORDER BY MyColumnToOrderBy) AS RowNum,
            *
        FROM
            Table_1
    ) sub
WHERE
    RowNum = 23


ROW_NUMBER is an analytic function. It assigns a unique number to each row to which it is applied (either each row in the partition or each row returned by the query), in the ordered sequence of rows specified in the order_by_clause, beginning with 1.

SELECT department_id, last_name, employee_id, ROW_NUMBER()
   OVER (PARTITION BY department_id ORDER BY employee_id) AS emp_id
   FROM employees;

DEPARTMENT_ID LAST_NAME                 EMPLOYEE_ID     EMP_ID
------------- ------------------------- ----------- ----------
           10 Whalen                            200          1
           20 Hartstein                         201          1
           20 Fay                               202          2
           30 Raphaely                          114          1
           30 Khoo                              115          2
           30 Baida                             116          3
           30 Tobias                            117          4
           30 Himuro                            118          5
           30 Colmenares                        119          6
           40 Mavris                            203          1
. . .
          100 Popp                              113          6
          110 Higgins                           205          1
          110 Gietz                             206          2

The following inner-N query selects all rows from the employees table but returns only the fifty-first through one-hundredth row:

SELECT last_name FROM
   (SELECT last_name, ROW_NUMBER() OVER (ORDER BY last_name) R FROM employees)
   WHERE R BETWEEN 51 and 100;

Ref : https://stackoverflow.com/questions/16568/how-to-select-the-nth-row-in-a-sql-database-table#42765
https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm


Display first 5 Records from Employee table

This works with the query above by changing the predicate to rownumber <= 5 (equivalently, rownumber < 6).

Find duplicate rows in a table.

Option 1:

SELECT
    a,
    b,
    COUNT(*) occurrences
FROM t1
GROUP BY
    a,
    b
HAVING
    COUNT(*) > 1;

Option 2:

Using a common table expression (read this CTE primer first: https://www.sqlservertutorial.net/sql-server-basics/sql-server-cte/)

WITH cte AS (
    SELECT
        a,
        b,
        COUNT(*) occurrences
    FROM t1
    GROUP BY
        a,
        b
    HAVING
        COUNT(*) > 1
)
SELECT
    t1.id,
    t1.a,
    t1.b
FROM t1
    INNER JOIN cte ON
        cte.a = t1.a AND
        cte.b = t1.b
ORDER BY
    t1.a,
    t1.b;

Option 3:

Using ROW_NUMBER

WITH cte AS (
    SELECT
        col,
        ROW_NUMBER() OVER (
            PARTITION BY col
            ORDER BY col) row_num
    FROM
        t1
)
SELECT * FROM cte
WHERE row_num > 1;

Ref : https://www.sqlservertutorial.net/sql-server-basics/sql-server-find-duplicates/


How to find Nth highest salary from a table

DENSE_RANK
DENSE_RANK computes the rank of a row in an ordered group of rows and returns the rank as a NUMBER. Unlike ROW_NUMBER, rows with equal values receive the same rank with no gaps in the sequence, which is why it suits "Nth highest salary" questions even when salaries repeat.

https://www.sqlservertutorial.net/sql-server-window-functions/sql-server-dense_rank-function/



Option 1: Using DENSE_RANK()

SELECT * FROM (
    SELECT ename, sal, DENSE_RANK() OVER (ORDER BY sal DESC) r FROM Employee
)
WHERE r = &n;

https://www.geeksforgeeks.org/find-nth-highest-salary-table/



SELECT * FROM ( SELECT e.*, ROW_NUMBER() OVER (ORDER BY salary DESC) rn FROM Employee e ) WHERE rn = N; /*N is the nth highest salary*/

Read more: https://javarevisited.blogspot.com/2016/01/4-ways-to-find-nth-highest-salary-in.html#ixzz6A4ttYeVW

How TreeMap Works In Java

What is a TreeMap?

The TreeMap class, like HashMap, stores key-value pairs. The major difference is that TreeMap keeps its keys sorted in ascending order.

According to the Java doc:

The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.
This implementation provides guaranteed log(n) time cost for the containsKey, get, put and remove operations. Algorithms are adaptations of those in Cormen, Leiserson, and Rivest's Introduction to Algorithms.
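A quick sketch of this sorted behavior (the keys and values below are arbitrary):

import java.util.Map;
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new TreeMap<>();
        map.put("banana", 2);
        map.put("apple", 1);
        map.put("cherry", 3);
        // Iteration follows the natural (ascending) key order,
        // regardless of insertion order
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
        }
        // prints: apple = 1, banana = 2, cherry = 3
    }
}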


How does TreeMap work in Java?

TreeMap is a Red-Black tree based NavigableMap implementation. In other words, it sorts the TreeMap object keys using the Red-Black tree algorithm.

So we learned that TreeMap uses the Red-Black tree algorithm internally to sort the elements.

The Red-Black algorithm is a complex one; it helps to read its pseudocode in order to understand the internal implementation.

A Red-Black tree has the following properties:

1. As the name of the algorithm suggests, every node in the tree is colored either red or black.

2. The root node must be black.

3. A red node cannot have a red neighbor (parent or child).

4. All paths from the root node to a null leaf must contain the same number of black nodes.


Rotation in a Red-Black tree:

[Figure: left and right rotations in a red-black tree]

Rotations maintain the in-order ordering of the keys (x, y, z), and a rotation can be performed in O(1) time.

You can find more about the Red-Black tree algorithm here.

Interviewer: Why and when do we use TreeMap?

We need a TreeMap when we want the keys maintained in sorted (ascending) order.

Interviewer: What is the runtime performance of the get() method in TreeMap and HashMap, where n represents the number of elements?

According to the TreeMap Java doc:

The TreeMap implementation provides guaranteed log(n) time cost for the containsKey, get, put and remove operations.

According to the HashMap Java doc:

The HashMap implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets.

One liner: TreeMap: log(n). HashMap: constant-time performance, assuming elements disperse properly.

Interviewer : What is "natural ordering" in TreeMap ?

"Natural" ordering is the ordering implied by the implementation of the Comparable interface by the objects used as keys in the TreeMap. Essentially, RBTree must be able to tell which key is smaller than the other key, and there are two ways to supply that logic to the RBTree implementation:

1.Implement Comparable interface in the class(es) used as keys to TreeMap, or
2.Supply an implementation of the Comparator that would do comparing outside the key class itself.


Natural ordering is the order provided by the Comparable interface .If somebody puts the key  that do not implement natural order then it will throw ClassCastException.
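A small sketch of the second option, supplying a Comparator at construction time (the reverse-order comparator is purely illustrative):

import java.util.Comparator;
import java.util.TreeMap;

public class ComparatorDemo {
    public static void main(String[] args) {
        // Keys sorted in descending order instead of their natural ordering
        TreeMap<String, Integer> map = new TreeMap<>(Comparator.reverseOrder());
        map.put("apple", 1);
        map.put("banana", 2);
        System.out.println(map.firstKey()); // prints: banana
    }
}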


Interviewer: Why do we need TreeMap when we have SortedMap?

SortedMap is an interface and TreeMap is the class implementing it. As we know, one cannot instantiate an interface; the interface only tells us which methods a SortedMap implementation must provide, and TreeMap is such an implementation.

Interviewer: Which data structure will you prefer in your code: HashMap or TreeMap?

HashMap is faster, while TreeMap is sorted, so we choose between them based on these advantages.

If you do not need the elements sorted but just want to insert and retrieve them, use HashMap.

But if you want to maintain the order of the keys, prefer TreeMap: iterating a HashMap yields no particular ordering, whereas TreeMap iterates in natural key order.

Interviewer: What happens if the TreeMap is concurrently modified while iterating over the elements?

The iterator fails fast: it throws a ConcurrentModificationException if the map is structurally modified at any time after the iterator is created (in any way except through the iterator's own remove method). We already discussed the difference between fail-fast and fail-safe iterators.
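A short sketch that triggers the fail-fast behavior:

import java.util.Map;
import java.util.TreeMap;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new TreeMap<>();
        map.put("a", 1);
        map.put("b", 2);
        for (String key : map.keySet()) {
            // Structural modification during iteration: the next
            // iterator step throws ConcurrentModificationException
            map.put("c", 3);
        }
    }
}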

Interviewer: Which copy technique (deep or shallow) is used by the TreeMap clone() method?

According to the docs, clone() returns a shallow copy of the TreeMap instance: the original and the copy share the same key and value objects; the keys and values themselves are not cloned.

Interviewer: Why does Java's TreeMap not allow an initial size?

HashMap reallocates its internal array as new entries are inserted, while TreeMap does not reallocate nodes when adding new ones. The TreeMap thus grows dynamically as needed, without shuffling its internals, so setting an initial size for a TreeMap would be meaningless.


Reference : https://javahungry.blogspot.com/2014/06/how-treemap-works-ten-treemap-java-interview-questions.html

Monday, November 4, 2019

Data Structure Interview Question - Stack

1. Implement a stack using an array.

Stack is an abstract data type that demonstrates Last-In-First-Out (LIFO) behavior. We will implement the same behavior using an array.
Although Java provides implementations of abstract data types such as Stack, Queue, and LinkedList, it is always a good idea to understand the basic data structures and implement them yourself.
Please note that the array implementation of a stack is not dynamic in nature; implement the stack with a linked list for dynamic behavior.

package com.sid.datastructure.stack;

public class ImplementStackUsingArray {

    int arr[];
    int top = -1;
    int size;

    public ImplementStackUsingArray(int size) {
        this.arr = new int[size];
        this.top = -1;
        this.size = size;
    }

    // Operations: push, pop, peek

    private void push(int pushedElement) {
        if (!isFull()) {
            arr[++top] = pushedElement;
            System.out.println("Element pushed " + pushedElement);
        } else {
            System.out.println("Stack is full");
        }
    }

    private int pop() {
        if (!isEmpty()) {
            System.out.println(arr[top]);
            return arr[top--];
        } else {
            System.out.println("Stack is empty");
            return -1;
        }
    }

    private int peek() {
        if (!isEmpty()) {
            return arr[top];
        } else {
            System.out.println("Stack is empty. Cannot peek");
            return -1;
        }
    }

    private boolean isEmpty() {
        return top == -1;
    }

    private boolean isFull() {
        return (top == size - 1);
    }

    public static void main(String[] args) {
        ImplementStackUsingArray stack = new ImplementStackUsingArray(10);
        stack.pop();
        System.out.println("=================");
        stack.push(10);
        stack.push(30);
        stack.push(50);
        stack.push(40);
        System.out.println("=================");
        stack.pop();
        stack.pop();
        stack.pop();
        System.out.println("=================");
    }
}

2. Implement a stack using two queues.
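A minimal sketch of one approach: push is O(1) into the first queue; pop drains all but the last element of the first queue into the second, returns that last element, and swaps the queues.

import java.util.LinkedList;
import java.util.Queue;

public class StackUsingTwoQueues {

    private Queue<Integer> q1 = new LinkedList<>();
    private Queue<Integer> q2 = new LinkedList<>();

    public void push(int x) {
        q1.add(x);
    }

    public int pop() {
        if (q1.isEmpty()) {
            throw new IllegalStateException("Stack is empty");
        }
        // Move all but the last element to the helper queue
        while (q1.size() > 1) {
            q2.add(q1.remove());
        }
        int top = q1.remove();
        // Swap the queues so q1 always holds the elements
        Queue<Integer> tmp = q1;
        q1 = q2;
        q2 = tmp;
        return top;
    }

    public static void main(String[] args) {
        StackUsingTwoQueues stack = new StackUsingTwoQueues();
        stack.push(1);
        stack.push(2);
        stack.push(3);
        System.out.println(stack.pop()); // 3
        System.out.println(stack.pop()); // 2
    }
}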

For more questions, visit the following link:

https://java2blog.com/data-structure-and-algorithm-interview-questions-in-java/

Sunday, November 3, 2019

What's New in Spring Framework 5?


Spring Framework 5.0 is the first major release of the Spring Framework since version 4 was released in December of 2013. Juergen Hoeller, Spring Framework project lead announced the release of the first Spring Framework 5.0 milestone (5.0 M1) on 28 July 2016.

Now, a year later, we are looking forward to Release Candidate 3 (RC3) to be released on July 18th, 2017. This is expected to be the final release on the roadmap to the first GA (General Availability) release of Spring Framework 5.0.

I’m excited about the new features and enhancements in Spring Framework 5.0.

At a high level, features of Spring Framework 5.0 can be categorized into:

JDK baseline update

Core framework revision

Core container updates

Functional programming with Kotlin

Reactive Programming Model

Testing improvements

Library support

Discontinued support

JDK Baseline Update for Spring Framework 5.0
The entire Spring framework 5.0 codebase runs on Java 8. Therefore, Java 8 is the minimum requirement to work on Spring Framework 5.0.

This is actually very significant for the framework. While we as developers have been able to enjoy all the new features found in modern Java releases, the framework itself was carrying a lot of baggage in supporting deprecated Java releases.

The framework now requires a minimum of Java 8.

Originally, Spring Framework 5.0 was expected to be released on Java 9. However, with the Java 9 release running 18+ months behind, the Spring team decided to decouple the Spring Framework 5.0 release from Java 9.

However, when Java 9 is released (expected in September of 2017), Spring Framework 5.0 will be ready.

Core Framework Revision
The core Spring Framework 5.0 has been revised to utilize the new features introduced in Java 8. The key ones are:

Based on Java 8 reflection enhancements, method parameters in Spring Framework 5.0 can be efficiently accessed.

Core Spring interfaces now provide selective declarations built on Java 8 default methods.

@Nullable and @NonNull annotations to explicitly mark nullable arguments and return values. This enables dealing with null values at compile time rather than throwing NullPointerExceptions at runtime.

On the logging front, Spring Framework 5.0 comes out of the box with its own Commons Logging bridge module, named spring-jcl, instead of the standard Commons Logging. Also, this new version auto-detects Log4j 2.x, SLF4J, and JUL (java.util.logging) without any extra bridges.

Defensive programming also gets a thrust, with the Resource abstraction providing an isFile indicator for the getFile method.

Core Container Updates
Spring Framework 5.0 now supports a candidate component index as an alternative to classpath scanning. This support has been added to shortcut the candidate-component identification step in the classpath scanner.

An application build task can define its own META-INF/spring.components file for the current project. At compile time, the source model is introspected and JPA entities and Spring components are flagged.

Reading entities from the index rather than scanning the classpath makes little difference for small projects with fewer than 200 classes. However, it has a significant impact on large projects.

Loading the component index is cheap. Therefore the startup time with the index remains constant as the number of classes increases, while for a component scan the startup time increases significantly.

What this means for developers on large Spring projects is that application startup time will be reduced significantly. While 20 or 30 seconds does not seem like much, when you're waiting for it dozens or hundreds of times a day, it adds up. Using the component index will help your daily productivity.

You can find more information on the component index feature on Spring’s Jira.

Now @Nullable annotations can also be used as indicators for optional injection points. Using @Nullable imposes an obligation on consumers to be prepared for the value to be null. Prior to this release, the only ways to accomplish this were Android's Nullable, the Checker Framework's Nullable, and JSR 305's Nullable.
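A rough sketch of an optional injection point (ReportService and AuditService are hypothetical names, not from the original article):

import org.springframework.lang.Nullable;
import org.springframework.stereotype.Service;

// Hypothetical collaborator that may or may not be registered as a bean
interface AuditService {
    void audit(String message);
}

@Service
public class ReportService {

    private final AuditService auditService;

    // With @Nullable this injection point is optional: if no AuditService
    // bean is defined, Spring injects null instead of failing at startup,
    // and the consumer must be prepared to handle the null.
    public ReportService(@Nullable AuditService auditService) {
        this.auditService = auditService;
    }

    public void report(String message) {
        if (auditService != null) {
            auditService.audit(message);
        }
        System.out.println(message);
    }
}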

Some other new and enhanced features from the release note are:

Implementation of functional programming style in GenericApplicationContext and AnnotationConfigApplicationContext

Consistent detection of transaction, caching, and async annotations on interface methods.

XML configuration namespaces streamlined towards unversioned schemas.

Functional Programming with Kotlin
Spring Framework 5.0 introduces support for JetBrains Kotlin language. Kotlin is an object-oriented language supporting functional programming style.

Kotlin runs on top of the JVM, but is not limited to it. With Kotlin support, developers can dive into functional Spring programming, in particular for functional web endpoints and bean registration.

In Spring Framework 5.0, you can write clean and idiomatic Kotlin code for the functional Web API, like this:

{
    ("/movie" and accept(TEXT_HTML)).nest {
        GET("/", movieHandler::findAllView)
        GET("/{card}", movieHandler::findOneView)
    }
    ("/api/movie" and accept(APPLICATION_JSON)).nest {
        GET("/", movieApiHandler::findAll)
        GET("/{id}", movieApiHandler::findOne)
    }
}
For bean registration, as an alternative to XML or @Configuration and @Bean, you can now use Kotlin to register your Spring beans, like this:

val context = GenericApplicationContext {
    registerBean()
    registerBean { Cinema(it.getBean()) }
}
Reactive Programming Model
An exciting feature in this Spring release is the new reactive stack Web framework.

Being fully reactive and non-blocking, Spring Framework 5.0 is suitable for event-loop style processing that can scale with a small number of threads.

Reactive Streams is an API specification developed by engineers from Netflix, Pivotal, Typesafe, Red Hat, Oracle, Twitter, and Spray.io. It provides a common API for reactive programming implementations to implement, much like JPA does for Hibernate: JPA is the API, and Hibernate is the implementation.

The Reactive Streams API is officially part of Java 9. In Java 8, you will need to include a dependency for the Reactive Streams API specification.

Streaming support in Spring Framework 5.0 is built upon Project Reactor, which implements the Reactive Streams API specification.

Spring Framework 5.0 has a new  spring-webflux module that supports reactive HTTP and WebSocket clients. Spring Framework 5.0 also provides support for reactive web applications running on servers which includes REST, HTML, and WebSocket style interactions.

I have a detailed post about Reactive Streams here.

There are two distinct programming models on the server side in spring-webflux:

Annotation-based with @Controller and the other annotations of Spring MVC

Functional style routing and handling with Java 8 lambda
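As a sketch of the second, functional model in Java (MovieHandler and its methods are hypothetical):

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;

import reactor.core.publisher.Mono;

public class MovieRouter {

    // Hypothetical handler: each method takes a ServerRequest
    // and returns a Mono<ServerResponse>
    static class MovieHandler {
        Mono<ServerResponse> findAll(ServerRequest request) {
            return ServerResponse.ok().syncBody("all movies");
        }

        Mono<ServerResponse> findOne(ServerRequest request) {
            return ServerResponse.ok().syncBody("movie " + request.pathVariable("id"));
        }
    }

    // Route definitions using Java 8 method references
    public RouterFunction<ServerResponse> routes(MovieHandler handler) {
        return route(GET("/movie"), handler::findAll)
                .andRoute(GET("/movie/{id}"), handler::findOne);
    }
}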

With Spring WebFlux, you can now create a WebClient, which is reactive and non-blocking, as an alternative to RestTemplate.

A WebClient call to a REST endpoint in Spring 5.0 looks like this:

WebClient webClient = WebClient.create();
Mono<Movie> movie = webClient.get()
        .uri("http://localhost:8080/movie/42")
        .accept(MediaType.APPLICATION_JSON)
        .exchange()
        .then(response -> response.bodyToMono(Movie.class));
While the new WebFlux module brings us some exciting new capabilities, traditional Spring MVC is still fully supported in Spring Framework 5.0.

Testing Improvements
Spring Framework 5.0 fully supports JUnit 5 Jupiter, so you can write tests and extensions against JUnit 5. In addition to providing a programming and extension model, the Jupiter sub-project provides a test engine to run Jupiter-based tests on Spring.

In addition, Spring Framework 5 provides support for parallel test execution in the Spring TestContext Framework. For the reactive programming model, spring-test now includes WebTestClient for integration-testing support of Spring WebFlux. The new WebTestClient, similar to MockMvc, does not need a running server: using a mock request and response, WebTestClient can bind directly to the WebFlux server infrastructure.
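A rough sketch of binding WebTestClient to a controller without a running server (MovieController is a hypothetical annotated controller):

// Bind the test client directly to a controller instance, no server needed
WebTestClient client = WebTestClient.bindToController(new MovieController()).build();

client.get().uri("/movie/42")
        .accept(MediaType.APPLICATION_JSON)
        .exchange()
        .expectStatus().isOk();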

For a complete list of enhancements in the existing TestContext framework, you can refer here.

Of course, Spring Framework 5.0 still supports our old friend JUnit 4 as well! At the time of writing, JUnit 5 is just about to go GA. Support for JUnit 4 is going to be with the Spring Framework for some time into the future.

Library Support
Spring Framework 5.0 now supports the following upgraded library versions:

Jackson 2.6+

EhCache 2.10+ / 3.0 GA

Hibernate 5.0+

JDBC 4.0+

XmlUnit 2.x+

OkHttp 3.x+

Netty 4.1+

Discontinued Support
At the API level, Spring Framework 5.0 has discontinued support for the following packages:

beans.factory.access

jdbc.support.nativejdbc

mock.staticmock of the spring-aspects module.

web.view.tiles2. Now Tiles 3 is the minimum requirement.

orm.hibernate3 and orm.hibernate4. Now, Hibernate 5 is the supported framework.

Spring Framework 5.0 has also discontinued support for the following libraries:

Portlet

Velocity

JasperReports

XMLBeans

JDO

Guava

If you are using any of the preceding packages, it is recommended to stay on Spring Framework 4.3.x.

Summary
The highlight of Spring Framework 5.0 is definitely reactive programming, which is a significant paradigm shift.

You can look at Spring Framework 5.0 as a cornerstone release for reactive programs. For the remainder of 2017 and beyond, you can expect to see child projects implement reactive features. You will see reactive programming features added to upcoming releases of Spring Data, Spring Security, Spring Integration and more.

The Spring Data team has already implemented reactive support for MongoDB and Redis.

It's still too early to get reactive support with JDBC. The JDBC specification itself is blocking, so it's going to be some time before we see reactive programming with traditional JDBC databases.

While Reactive Programming is the shiny new toy inside of Spring Framework 5.0, it’s not going to be supported everywhere. The downstream technologies need to provide reactive support.

With the growing popularity of reactive programming, we can expect to see more and more technologies implement reactive solutions. The reactive landscape is rapidly evolving. Spring Framework 5 uses Reactor, which is a Reactive Streams compliant implementation. You can read more about the Reactive Streams specification here.