The most complete Java interview questions on the whole web

1. What are the three elements of concurrent programming?

(1) Atomicity

Atomicity means that one or more operations either all execute without being interrupted by other operations, or none of them execute at all.

(2) Visibility

Visibility means that when multiple threads operate on a shared variable, after one thread modifies the variable, other threads can immediately see the result of the modification.

(3) Ordering

Ordering means that the program executes in the order in which the code is written.

2. What are the ways to achieve visibility?

(1) volatile: when a shared variable is declared volatile, writes to it are flushed to main memory immediately and other threads read the latest value from main memory (see question 14).

(2) synchronized or Lock: only one thread at a time acquires the lock and executes the code, and the latest values are flushed to main memory before the lock is released, which achieves visibility.

3. The value of multithreading?

(1) Take advantage of multi-core CPUs

Multithreading can make full use of a multi-core CPU: several things are done at the same time without interfering with each other.

(2) Prevent blocking

From the perspective of running efficiency, a single-core CPU does not benefit from multithreading; on the contrary, the thread context switching caused by running multiple threads on a single core lowers the overall efficiency of the program. But even on a single-core CPU we still apply multithreading, simply to prevent blocking. Imagine a single-threaded program on a single-core CPU: if that one thread blocks, for example while reading data from a remote host that has not responded and no timeout has been set, the whole program stops until the data comes back. Multithreading prevents this problem: with several threads running, even if one thread blocks while reading data, the other tasks keep executing.

(3) Easy to model

This is another, less obvious advantage. Suppose there is a large task A. With single-threaded programming there is a lot to consider, and building the model for the whole program is troublesome. If, however, task A is decomposed into several small tasks B, C and D, each modeled separately and run on its own thread, things become much simpler.

4. What are the ways to create threads?

(1) Inherit the Thread class and override its run() method

(2) Implement the Runnable interface

(3) Create threads through Callable and Future

(4) Create threads through a thread pool
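As a minimal sketch of the four approaches (the class names and tasks here are made up purely for illustration):

```java
import java.util.concurrent.*;

public class ThreadCreationDemo {

    // (1) Subclass Thread and override run()
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("running in a Thread subclass");
        }
    }

    public static void main(String[] args) throws Exception {
        new MyThread().start();

        // (2) Implement Runnable and hand it to a Thread
        Runnable task = () -> System.out.println("running a Runnable");
        new Thread(task).start();

        // (3) Implement Callable, wrap it in a FutureTask, read the result via the Future API
        Callable<Integer> callable = () -> 1 + 1;
        FutureTask<Integer> futureTask = new FutureTask<>(callable);
        new Thread(futureTask).start();
        System.out.println("Callable result: " + futureTask.get());

        // (4) Submit tasks to a thread pool
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> future = pool.submit(callable);
        System.out.println("pool result: " + future.get());
        pool.shutdown();
    }
}
```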

5. Comparison of three ways to create threads?

(1) Create multiple threads by implementing Runnable and Callable interfaces. The advantages are:

The thread class only implements the Runnable or Callable interface, so it can still extend another class. In addition, multiple threads can share the same target object, which makes this approach well suited to several threads processing the same resource; CPU, code and data are separated into a clear model, which better reflects object-oriented thinking.

The disadvantages are:

Programming is slightly more complicated; if you want to access the current thread, you have to use the Thread.currentThread() method.

(2) The advantages of creating multi-threading by inheriting the Thread class are:

Writing is simple: if you need to access the current thread, you don't need Thread.currentThread(), you can simply use this to get the current thread.

The disadvantages are:

The thread class already extends Thread, so it cannot extend any other parent class.

(3) The difference between Runnable and Callable

  1. The method to implement (override) for Callable is call(), while for Runnable it is run().

  2. A Callable task can return a value after execution, while a Runnable task cannot.

  3. The call() method can throw exceptions, while the run() method cannot.

  4. Running a Callable task yields a Future object, which represents the result of the asynchronous computation. It provides methods to check whether the computation is complete, to wait for it to complete, and to retrieve its result. Through the Future object you can check on the task's progress, cancel the task, and obtain its result.

6. The state flow diagram of the thread

The life cycle of a thread and its five basic states are described in the next question.

7. Java threads have five basic states

(1) New state (New):

When the thread object is created, the thread enters the new state, for example: Thread t = new MyThread();

(2) Ready state (Runnable):

When the start() method of the thread object is called (t.start();), the thread enters the ready state. A thread in the ready state only means that it is ready and waiting for the CPU to schedule it at any time; it does not mean that the thread executes immediately after t.start() returns;

(3) Running state (Running):

When the CPU starts to schedule the thread in the ready state, the thread can actually execute at this time, that is, it enters the running state. Note: The ready state is the only entry to the running state, that is to say, if a thread wants to enter the running state for execution, it must first be in the ready state;

(4) Blocked state (Blocked):

For some reason, the thread in the running state temporarily gives up the CPU and stops executing; at this point it enters the blocked state. Until it returns to the ready state, it has no chance of being scheduled by the CPU to run again. Depending on the cause of the blocking, the blocked state can be divided into three kinds:

1) Waiting for blocking: The thread in the running state executes the wait() method to make the thread enter the waiting-blocking state;

2) Synchronization blocking: If the thread fails to acquire the synchronized synchronization lock (because the lock is occupied by other threads), it will enter the synchronization blocking state;

3) Other blocking: the thread enters the blocked state by calling sleep() or join(), or by issuing an I/O request. When sleep() times out, when join() returns because the target thread terminated or the wait timed out, or when the I/O completes, the thread re-enters the ready state.

(5) Dead state (Dead):

When the thread finishes executing or exits the run() method due to an exception, the thread ends its life cycle.

8. What is a thread pool? What kinds of creation methods are there?

A thread pool creates a number of threads in advance. When there are tasks to process, the threads in the pool handle them; after a task finishes, the thread is not destroyed but waits for the next task. Since creating and destroying threads consumes system resources, consider using a thread pool whenever you would otherwise create and destroy threads frequently; it improves system performance.

Java provides ThreadPoolExecutor, an implementation of the java.util.concurrent.Executor interface, together with the Executors factory class, for creating thread pools.

9. Creation of four thread pools:

(1) newCachedThreadPool creates a cacheable thread pool

(2) newFixedThreadPool creates a fixed-length thread pool, which can control the maximum number of concurrent threads.

(3) newScheduledThreadPool creates a fixed-length thread pool that supports timing and periodic task execution.

(4) newSingleThreadExecutor creates a single-threaded thread pool that only uses a single worker thread to execute tasks.
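A brief sketch of the four factory methods (the pool sizes and tasks are arbitrary examples):

```java
import java.util.concurrent.*;

public class ExecutorsDemo {
    public static void main(String[] args) {
        // (1) Cached pool: creates threads on demand and reuses idle ones
        ExecutorService cached = Executors.newCachedThreadPool();
        cached.submit(() -> System.out.println("cached pool task"));

        // (2) Fixed pool: at most 4 worker threads here
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        fixed.submit(() -> System.out.println("fixed pool task"));

        // (3) Scheduled pool: supports delayed and periodic execution
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);
        scheduled.schedule(() -> System.out.println("runs after 1 second"), 1, TimeUnit.SECONDS);

        // (4) Single-threaded pool: tasks run one after another in a single worker thread
        ExecutorService single = Executors.newSingleThreadExecutor();
        single.submit(() -> System.out.println("runs alone"));

        cached.shutdown();
        fixed.shutdown();
        single.shutdown();
        scheduled.shutdown();   // the pending delayed task still runs (default shutdown policy)
    }
}
```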

10. What are the advantages of thread pools?

(1) Reuse existing threads to reduce the overhead of object creation and destruction.

(2) It can effectively control the maximum number of concurrent threads, improve the utilization of system resources, and avoid excessive resource contention and blocking.

(3) Provide functions such as timed execution, regular execution, single thread, and concurrent number control.

11. What are the commonly used concurrency tool classes?

(1) CountDownLatch

(2) CyclicBarrier

(3) Semaphore

(4) Exchanger

12. The difference between CyclicBarrier and CountDownLatch

(1) CountDownLatch makes one or more threads wait: the waiting thread can continue only after the other threads it waits for have finished their work and called countDown() to signal it.

(2) CyclicBarrier makes a group of threads wait for one another: only when all of them have reached the barrier by calling await() do they all start executing at the same time.

(3) The counter of CountDownLatch can only be used once. The CyclicBarrier’s counter can be reset using the reset() method. So CyclicBarrier can handle more complex business scenarios, such as if a calculation error occurs, the counter can be reset and the threads can be re-executed.

(4) CyclicBarrier also provides other useful methods, such as getNumberWaiting method to obtain the number of threads blocked by CyclicBarrier. The isBroken method is used to know if a blocked thread has been interrupted. Returns true if interrupted, false otherwise.
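A small sketch contrasting the two (the thread counts and printed messages are illustrative):

```java
import java.util.concurrent.*;

public class LatchVsBarrierDemo {
    public static void main(String[] args) throws Exception {
        // CountDownLatch: the main thread waits until 3 workers have called countDown()
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " finished");
                latch.countDown();
            }).start();
        }
        latch.await();                        // blocks until the count reaches zero
        System.out.println("all workers done");

        // CyclicBarrier: 3 threads wait for each other, then proceed together
        CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("barrier tripped"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await();          // each thread waits here for the others
                    System.out.println(Thread.currentThread().getName() + " passed the barrier");
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```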

13. What is the role of synchronized?

In Java, the synchronized keyword is used for thread synchronization: in a multi-threaded environment, a synchronized section of code cannot be executed by more than one thread at the same time. synchronized can be applied to a block of code or to a method.
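A minimal sketch of the two forms (the Counter class is invented for this example); the synchronized method locks on this for its whole body, while the synchronized block locks only the critical section:

```java
public class Counter {
    private int count;

    // Synchronized instance method: the lock is the Counter object itself (this)
    public synchronized void increment() {
        count++;
    }

    // Synchronized block on the same monitor: only the critical section holds the lock,
    // so the work before the block runs without blocking other threads
    public void incrementWithBlock() {
        // ... non-critical preparation work could go here ...
        synchronized (this) {
            count++;
        }
    }

    public synchronized int get() {
        return count;
    }
}
```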

14. The role of the volatile keyword

For visibility, Java provides the volatile keyword. When a shared variable is modified by volatile, it is guaranteed that the new value is flushed to main memory immediately, and that other threads read the latest value from main memory when they need it. From a practical point of view, an important use of volatile is in combination with CAS to guarantee atomicity; for details, see the classes under the java.util.concurrent.atomic package, such as AtomicInteger.
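A minimal sketch of the visibility guarantee, using a volatile stop flag (the class name and timings are illustrative):

```java
public class VolatileFlagDemo {
    // Without volatile, the worker thread might never observe the update to "running"
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; each volatile read sees the latest value from main memory
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // the write is flushed to main memory and becomes visible to the worker
        worker.join();
    }
}
```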

15. What is CAS

CAS is short for compare and swap, which is what we call a comparison exchange.

CAS is a lock-free operation and a form of optimistic locking. Locks in Java can be divided into optimistic and pessimistic locks. A pessimistic lock locks the resource, and the next thread can access it only after the thread that previously acquired the lock releases it. An optimistic lock takes a relaxed attitude and works on the resource without locking, for example by attaching a version number to records when reading data; its performance is much better than that of pessimistic locking.

A CAS operation involves three operands: the memory location (V), the expected old value (A), and the new value (B). If the value at the memory location equals A, the value is updated to B. CAS retries in a loop: if the value thread a read has already been modified by thread b in the current round, thread a spins and may succeed in the next iteration.

Most of the classes under the java.util.concurrent.atomic package are implemented using CAS operations

(AtomicInteger, AtomicBoolean, AtomicLong).
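A minimal sketch of both a single compareAndSet and the typical CAS retry (spin) loop, using AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // Single CAS: succeeds only if the current value equals the expected value (10)
        boolean swapped = value.compareAndSet(10, 20);
        System.out.println(swapped + ", value = " + value.get());   // true, 20

        // Typical CAS retry (spin) loop: keep trying until our update wins
        int oldValue;
        int newValue;
        do {
            oldValue = value.get();
            newValue = oldValue + 1;
        } while (!value.compareAndSet(oldValue, newValue));
        System.out.println("after increment: " + value.get());      // 21
    }
}
```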

16. Problems with CAS

(1) CAS is likely to cause ABA problems

A thread changes the value from a to b and then back to a; CAS then believes the value has not changed, even though it has. This can be solved with a version number that is incremented by 1 on every update. Since Java 5, AtomicStampedReference has been provided to solve this problem (see the sketch after this list).

(2) The atomicity of code blocks cannot be guaranteed

What CAS guarantees is the atomicity of an operation on a single variable, not the atomicity of a whole block of code. For example, if three variables must be updated together atomically, you have to use synchronized.

(3) CAS causes CPU utilization to increase

As mentioned above, CAS works by looping and retrying. If a thread keeps failing to succeed, it keeps spinning and occupies CPU resources the whole time.
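To illustrate the ABA fix, here is a minimal sketch with AtomicStampedReference, which pairs the value with a stamp (version number) so that an A -> B -> A change is detected (the values and stamps are arbitrary examples):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int[] stampHolder = new int[1];
        String expected = ref.get(stampHolder);          // value "A"
        int expectedStamp = stampHolder[0];              // stamp 0

        // Another thread changes A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet("B", "A", ref.getStamp(), ref.getStamp() + 1);

        // A plain value comparison would still see "A", but the stamped CAS fails
        boolean ok = ref.compareAndSet(expected, "C", expectedStamp, expectedStamp + 1);
        System.out.println("CAS succeeded? " + ok);      // false: the stamp has moved on
    }
}
```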

17. What is Future?

In concurrent programming we often use a non-blocking model. Of the thread-creation approaches described earlier, neither extending the Thread class nor implementing the Runnable interface lets us obtain the result of a task. By implementing the Callable interface and using Future, the execution results of multiple threads can be collected.

Future represents the result of an asynchronous task that may not yet have completed; a callback can be attached to such a result so that a corresponding action runs when the task succeeds or fails.

18. What is AQS

AQS is short for AbstractQueuedSynchronizer, a low-level synchronization framework provided by Java. It uses an int variable to represent the synchronization state and provides a series of CAS operations to manage that state.

AQS is a framework for building locks and synchronizers. With AQS it is easy to build the widely used synchronizers efficiently: ReentrantLock and Semaphore, which we mentioned above, as well as ReentrantReadWriteLock, SynchronousQueue, FutureTask and others, are all based on AQS.

19. AQS supports two synchronization methods:

(1) Exclusive

(2) Shared

This makes it convenient to implement different kinds of synchronization components: exclusive ones such as ReentrantLock, shared ones such as Semaphore and CountDownLatch, and combined ones such as ReentrantReadWriteLock. In short, AQS provides the underlying support, and users are free to decide how to assemble synchronizers on top of it.
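As a rough sketch of how AQS is typically used (a non-reentrant exclusive lock, invented here purely for illustration), a synchronizer subclasses AQS in a private helper and delegates to it:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant exclusive lock built on AQS (illustrative only)
public class SimpleMutex {

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the synchronization state from 0 (unlocked) to 1 (locked)
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0);                 // back to unlocked
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // AQS queues and parks waiters for us
    public void unlock() { sync.release(1); }
}
```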

20. What is ReadWriteLock

First, to be clear: it is not that ReentrantLock is bad, it is just that ReentrantLock is sometimes limited. ReentrantLock may be used to prevent the data inconsistency caused by thread A writing while thread B reads; but if thread C and thread D are both only reading, reading does not change the data and no lock is needed, yet ReentrantLock still locks, which reduces the program's performance. That is why the read-write lock ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it. It separates reads from writes: the read lock is shared and the write lock is exclusive. Read/read is not mutually exclusive, while read/write, write/read and write/write are mutually exclusive, which improves read and write performance.
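A minimal sketch of a read-write-locked cache (the ReadWriteCache class is invented for this example): readers share the read lock while writers take the exclusive write lock:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public Object get(String key) {
        rwLock.readLock().lock();          // many readers may hold the read lock at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rwLock.writeLock().lock();         // exclusive: blocks other writers and all readers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```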

21. What is FutureTask

In fact, as mentioned earlier, FutureTask represents an asynchronous task. A FutureTask is constructed from a concrete Callable implementation, and it supports waiting for the result of the asynchronous task, checking whether it has completed, cancelling the task, and so on. And since FutureTask is also an implementation of the Runnable interface, a FutureTask can be submitted to a thread pool.
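A minimal sketch (the sleep and the return value of 42 are arbitrary): the FutureTask wraps a Callable, is submitted to a pool as a Runnable, and is then queried as a Future:

```java
import java.util.concurrent.*;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // Wrap a Callable in a FutureTask; it is both a Runnable and a Future
        FutureTask<Integer> task = new FutureTask<>(() -> {
            Thread.sleep(500);       // simulate some work
            return 42;
        });

        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(task);           // accepted because FutureTask implements Runnable

        System.out.println("done yet? " + task.isDone());
        System.out.println("result: " + task.get());   // blocks until the result is ready
        pool.shutdown();
    }
}
```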

22. The difference between synchronized and ReentrantLock

synchronized is a keyword, just like if, else, for and while, whereas ReentrantLock is a class; that is the essential difference between the two. Because ReentrantLock is a class, it can provide more, and more flexible, features than synchronized: it can be subclassed, can have methods, and can have all kinds of instance variables. The advantages of ReentrantLock over synchronized show in several points:

(1) ReentrantLock can set the waiting time for acquiring locks, thus avoiding deadlocks

(2) ReentrantLock can obtain information about various locks

(3) ReentrantLock can flexibly implement multiple notifications

In addition, the locking mechanisms of the two are different: under the hood, ReentrantLock parks waiting threads via LockSupport.park (which in turn uses Unsafe.park), while synchronized operates on the mark word in the object header.
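A small sketch of two of those advantages, a timed tryLock and lock information, using an invented TryLockDemo class:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true);   // fair lock

    public void doWork() throws InterruptedException {
        // Wait at most 2 seconds for the lock instead of blocking forever,
        // which is one way to avoid waiting indefinitely in a potential deadlock
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                System.out.println("got the lock, waiting threads = " + lock.getQueueLength());
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("gave up waiting for the lock");
        }
    }
}
```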

23. What is optimistic locking and pessimistic locking

(1) Optimistic locking:

Just as its name suggests, it is optimistic about the thread-safety problems caused by concurrent operations. An optimistic lock assumes that contention will not always occur, so it does not hold a lock; it treats compare-and-replace as one atomic operation and uses it to try to modify the variable in memory. If that fails, it means there was a conflict, and there should be corresponding retry logic.

(2) Pessimistic lock:

Again as its name suggests, it is pessimistic about the thread-safety problems caused by concurrent operations. A pessimistic lock assumes that contention will always occur, so every time it operates on a resource it holds an exclusive lock, just like synchronized: lock first, no questions asked, and then operate on the resource.

24. How does thread B know that thread A has modified the variable

(1) volatile modified variable

(2) synchronized on the methods that read and modify the variable

(3) wait/notify

(4) while polling
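A minimal sketch of option (3), wait/notify (the flag name and sleep are illustrative); note that both wait() and notifyAll() are called while holding the monitor, which is also the point of question 28 below:

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {                 // must hold the monitor before wait()
                while (!ready) {                  // guard against spurious wakeups
                    try {
                        lock.wait();              // releases the monitor while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("thread B sees the change made by thread A");
            }
        });
        waiter.start();

        Thread.sleep(100);
        synchronized (lock) {                     // must hold the monitor before notify()
            ready = true;
            lock.notifyAll();
        }
    }
}
```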

25. Comparison of synchronized, volatile, and CAS

(1) synchronized is a pessimistic lock, which is preemptive and will cause other threads to block.

(2) volatile provides multi-threaded shared variable visibility and prohibits instruction reordering optimization.

(3) CAS is an optimistic lock based on conflict detection (non-blocking)

26. What is the difference between the sleep method and the wait method?

This question is often asked. Both the sleep method and the wait method can make a thread give up the CPU for a period of time. The difference is that if the thread holds an object's monitor, sleep does not release that monitor, while wait does release it.

27. What is ThreadLocal? What is the use?

ThreadLocal is a utility class for thread-local copies of a variable. It maps each thread to its own private copy of an object, so the copies held by different threads do not interfere with one another. In high-concurrency scenarios it enables stateless calls, and it is especially suitable when each thread depends on its own value of a variable to complete its work. Simply put, ThreadLocal trades space for time: every Thread maintains a ThreadLocal.ThreadLocalMap (implemented with open addressing), which isolates data instead of sharing it, so naturally there is no thread-safety problem.
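A minimal sketch (the counter and thread names are invented for the example): each thread increments its own copy, so neither sees the other's value:

```java
public class ThreadLocalDemo {
    // Each thread gets its own, independently initialized copy of the value
    private static final ThreadLocal<Integer> COUNTER = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                COUNTER.set(COUNTER.get() + 1);
            }
            System.out.println(Thread.currentThread().getName() + " -> " + COUNTER.get());
            COUNTER.remove();   // avoid leaks when threads are pooled and reused
        };
        new Thread(task, "thread-1").start();   // prints 3
        new Thread(task, "thread-2").start();   // also prints 3: the copies do not interfere
    }
}
```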

28. Why wait() method and notify()/notifyAll() method should be called in synchronized block

This is mandated by the JDK: wait() and notify()/notifyAll() must be called while holding the object's monitor lock, otherwise an IllegalMonitorStateException is thrown.

29. What methods are there for multi-thread synchronization?

Synchronized keyword, Lock lock implementation, distributed lock, etc.

30. Thread scheduling strategy

The thread scheduler chooses the thread with the highest priority to run; however, the currently running thread gives up the CPU in the following situations:

(1) The yield method is called in the thread body to give up the right to occupy the CPU

(2) The sleep method is called in the thread body to make the thread go to sleep

(3) The thread is blocked due to IO operation

(4) Another higher priority thread appears

(5) In systems that support time slices, the thread’s time slice runs out

31. What is the concurrency of ConcurrentHashMap?

The concurrency level of ConcurrentHashMap is the number of segments, which is 16 by default. That means that at most 16 threads can operate on a ConcurrentHashMap at the same time (in the segment-based implementation), which is also ConcurrentHashMap's biggest advantage over Hashtable: Hashtable can never have two threads accessing its data at the same time.

32. How to find which thread uses the CPU the longest in the Linux environment

(1) Get the pid of the Java process: jps or ps -ef | grep java

(2) top -H -p <pid> to show the per-thread CPU usage (the option order cannot be changed)

33. Java deadlock and how to avoid it?

Deadlock in Java is a situation in which two or more threads are blocked forever. A Java deadlock involves at least two threads and two or more resources.

The root cause of deadlock in Java is a closed loop of cross-dependent lock requests.

34. Causes of deadlocks

(1) Multiple threads hold multiple locks, and the locks cross each other, which can lead to a closed loop of lock dependencies. For example, a thread acquires lock A and, without releasing it, requests lock B; meanwhile another thread has already acquired lock B and must acquire lock A before it will release B. A closed loop forms and a deadlock occurs.

(2) The default lock application operation is blocking.

Therefore, to avoid deadlock, whenever the locks of multiple objects cross each other, carefully examine all the methods of the classes involved to see whether a loop of lock dependencies is possible. In short, try to avoid calling time-consuming methods or the synchronized methods of other objects from inside a synchronized method.
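A minimal sketch of the closed loop described above, with the fix noted in the comments (the lock names and sleeps are purely illustrative):

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // Thread 1 takes A then B; thread 2 takes B then A: a circular wait can form
        new Thread(() -> {
            synchronized (LOCK_A) {
                sleep(100);
                synchronized (LOCK_B) {
                    System.out.println("thread 1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (LOCK_B) {
                sleep(100);
                synchronized (LOCK_A) {
                    System.out.println("thread 2 got both locks");
                }
            }
        }).start();
        // Fix: have every thread acquire the locks in the same global order (A, then B),
        // or use ReentrantLock.tryLock with a timeout and back off when it fails.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```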

35. How to wake up a blocked thread

If the thread is blocked because it called wait(), sleep() or join(), you can interrupt it, and it is woken up by the thrown InterruptedException. If the thread is blocked on I/O, there is little you can do, because I/O is implemented by the operating system and Java code has no way to reach into the operating system directly.

36. How do immutable objects help with multithreading

As mentioned above, immutable objects ensure the memory visibility of objects, and the reading of immutable objects does not require additional synchronization methods, which improves the efficiency of code execution.

37. What is multi-threaded context switching

Multi-threaded context switching refers to the process of switching CPU control from a thread that is already running to another thread that is ready and waiting to acquire CPU execution rights.

38. What happens if the thread pool queue is full when you submit a task

Here’s a distinction:

(1) If you are using an unbounded queue such as LinkedBlockingQueue with no capacity bound, it does not matter: tasks simply keep being added to the blocking queue to await execution, because an unbounded LinkedBlockingQueue can be regarded as an infinite queue that can hold tasks without limit.

(2) If a bounded queue such as ArrayBlockingQueue is used, tasks are first added to the ArrayBlockingQueue. When the queue is full, the pool grows the number of threads up to maximumPoolSize; if the queue is still full and the thread count has already reached maximumPoolSize, the rejection policy RejectedExecutionHandler is applied to the overflowing task, AbortPolicy by default.
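A minimal sketch of that behaviour (the pool sizes, queue capacity and task count are invented for the example): with core 2, max 4 and a queue of 10, twenty one-second tasks will saturate the pool and the last ones are rejected by AbortPolicy:

```java
import java.util.concurrent.*;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10),
                new ThreadPoolExecutor.AbortPolicy());   // default rejection policy

        for (int i = 0; i < 20; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    System.out.println("task " + id + " done");
                });
            } catch (RejectedExecutionException e) {
                // queue full and maximumPoolSize reached: the task is rejected
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}
```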

39. What is the thread scheduling algorithm used in Java

Preemptive. After a thread uses up its CPU time, the operating system computes an overall priority based on data such as thread priority and thread starvation, and assigns the next time slice to a particular thread.

40. What is Thread Scheduler and Time Slicing?

The thread scheduler is an operating system service that is responsible for allocating CPU time to threads in the Runnable state. Once we create a thread and start it, its execution depends on the implementation of the thread scheduler. Time slicing is the process of allocating available CPU time to available Runnable threads. Allocating CPU time can be based on thread priority or the amount of time a thread waits. Thread scheduling is not under the control of the Java virtual machine, so it is better to let the application control it (ie don’t make your program depend on thread priorities).

41. What is spin

Much of the code inside synchronized blocks is very simple and executes very quickly. For a thread waiting on such a lock, blocking it may not be worthwhile, because blocking a thread involves switching between user mode and kernel mode. Since the code inside synchronized executes so fast, it may be better not to block the waiting thread but to let it busy-loop at the boundary of the synchronized block: this is spinning. If after many busy loops the lock still has not been acquired, blocking then may be the better strategy.

42. What is the Lock interface in the Java Concurrency API? What advantages does it have over synchronization?

The Lock interface provides more extensible locking operations than synchronized methods and synchronized blocks. It allows more flexible structures, can have quite different properties, and can support multiple Condition objects associated with a single lock.

Its advantages are:

(1) It can make the lock more fair

(2) Threads can be made to respond to interrupts while waiting for locks

(3) You can let the thread try to acquire the lock and return immediately or wait for a period of time when the lock cannot be acquired

(4) Locks can be acquired and released in different scopes and in different orders

43. The thread safety of the singleton pattern

An old-fashioned question. First of all, thread safety of the singleton pattern means that an instance of the class is created only once, even in a multi-threaded environment. There are many ways to write a singleton; to summarize:

(1) Eager (hungry-style) initialization: thread-safe

(2) Lazy initialization without locking: not thread-safe

(3) Double-checked locking: thread-safe
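A minimal sketch of the double-checked-locking variant; the volatile keyword is what makes it safe, because it prevents other threads from seeing a partially constructed instance:

```java
public class Singleton {
    // volatile forbids the reordering that could expose a half-constructed object
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                     // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {             // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```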

44. What does Semaphore do?

Semaphore is a counting semaphore; its role is to limit the number of threads that may execute a block of code concurrently. Its constructor takes an int n, which means that at most n threads can access that piece of code at the same time. Beyond n, a thread must wait until one of the threads currently executing the block finishes, and only then can the next thread enter. It follows that if the n passed to the Semaphore constructor is 1, it becomes equivalent to synchronized.
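A minimal sketch (the permit count of 3 and the sleep are arbitrary): at most three of the ten threads are inside the guarded section at any moment:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // At most 3 threads may be inside the guarded section at the same time
    private static final Semaphore PERMITS = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    PERMITS.acquire();                 // blocks if all 3 permits are taken
                    try {
                        System.out.println(Thread.currentThread().getName() + " working");
                        Thread.sleep(500);
                    } finally {
                        PERMITS.release();             // give the permit back
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```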

45. What is the Executors class?

The Executors class provides utility and factory methods that make it easy to create thread pools (see question 9).

46. Which thread calls the constructor and static block of a thread class

This is a tricky and cunning question. Remember: the constructor and static block of a thread class are executed by the thread that news the thread object, while the code in the run method is executed by the new thread itself.

If the above statement confuses you, here is an example. Suppose Thread1 is created (new) inside Thread2, and Thread2 is created (new) in the main method; then:

(1) The construction method and static block of Thread2 are called by the main thread, and the run() method of Thread2 is called by Thread2 itself

(2) The construction method and static block of Thread1 are called by Thread2, and the run() method of Thread1 is called by Thread1 itself

47. Which is the better choice, synchronization method or synchronization block?

The synchronized block: the code outside the block runs without holding the lock, which is more efficient than synchronizing the whole method. Remember one rule: the smaller the scope of synchronization, the better.

48. What problems are caused by creating too many Java threads?

(1) The life cycle overhead of threads is very high

(2) Excessive CPU resource consumption

If there are more runnable threads than available processors, some threads sit idle. A large number of idle threads takes up a lot of memory and puts pressure on the garbage collector, and a large number of threads competing for CPU time imposes other performance costs as well.

(3) Reduce stability

The JVM has a limit on the number of threads that can be created. The limit varies from platform to platform and depends on several factors, including the JVM startup parameters, the stack size requested in the Thread constructor, and the underlying operating system's limits on threads. If these limits are exceeded, an OutOfMemoryError may be thrown.

Summary

Due to limited space and content, this sharing ends here. I hope the article was helpful to you.
