Redis Tutorial

Installing Redis on Linux

Redis is written in C. It is an in-memory database and can be thought of as a server wrapping a handful of general-purpose data structures. Its interface is a socket interface, not a shared library (.so) that you link directly into your own program.

C++ static library: the library code is merged into the final executable, which makes the resulting file larger. The advantage is that the binary has no external dependencies.
C++ dynamic (shared) library: one library file can be used by many programs at the same time with only one copy in memory, which saves memory, and it can be built alongside the main code. The drawback is that header files (and the .so at run time) are still required.
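A minimal sketch of the difference (the file names foo.cpp/main.cpp are made up, and this has nothing to do with the Redis build itself):

# static library: object code is archived and later copied into the final binary
g++ -c foo.cpp -o foo.o
ar rcs libfoo.a foo.o
g++ -static main.cpp -L. -lfoo -o app_static    # big binary, no runtime dependency

# shared library: position-independent code, one copy shared in memory at run time
g++ -fPIC -c foo.cpp -o foo.o
g++ -shared foo.o -o libfoo.so
g++ main.cpp -L. -lfoo -o app_dynamic           # small binary, needs libfoo.so at run time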

Install dependencies

yum install -y gcc 

Download the tarball

wget http://download.redis.io/releases/redis-5.0.3.tar.gz
tar -zxvf redis-5.0.3.tar.gz

Build

cd redis-5.0.3
make

Install to a specified directory

make install PREFIX=/usr/local/redis

Start the server

cd /usr/local/redis/bin/
./redis-server # ./redis-server redis.conf

Start in the background (daemonize)

cp /usr/local/redis-5.0.3/redis.conf /usr/local/redis/bin/
Edit redis.conf and change daemonize no to daemonize yes.
Then start with: ./redis-server redis.conf
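If you prefer to script the change, something like this works (assuming the config was copied to the path above):

sed -i 's/^daemonize no/daemonize yes/' /usr/local/redis/bin/redis.conf
./redis-server /usr/local/redis/bin/redis.conf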

Configuration

To allow access from other machines, edit redis.conf and change
bind 127.0.0.1 to bind 0.0.0.0,
or add --protected-mode no to the start command.

Firewall: allow the port through with firewall-cmd --zone=public --add-port=6379/tcp --permanent followed by firewall-cmd --reload.
Change the port: don't keep the default 6379, which is a well-known and frequently attacked port; a simple scheme is to add 10,000–20,000 to it. Also set a password by adding requirepass sdfsd1f215 to redis.conf.
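In redis.conf that would look like the following (the port value is only an illustration of the "default + 10,000" idea; the password is the one used throughout this article):

port 16379
requirepass sdfsd1f215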

Logging: set logfile "/var/log/redis/redis.log" in redis.conf. The /var/log/redis directory must be created and given the proper permissions; alternatively the log file can live in the Redis install directory, and the same goes for the pid file.
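For example, assuming the service will later run as the dedicated redis user created in the systemd section below:

mkdir -p /var/log/redis
chown redis:redis /var/log/redis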

Log in: ./redis-cli -p 6379, then run auth sdfsd1f215 followed by ping (or pass the password directly with -a sdfsd1f215).

Additional notes:
The log file, pid file, and RDB working directory can all be configured under the Redis install directory /usr/local/redis/; otherwise, if the server is started by a non-root user, it may not have permission to create those files.

pidfile /usr/local/redis/redis.pid
logfile "/usr/local/redis/redis.log"
dir /usr/local/redis/bin

Start on boot

Option 1:

vi /etc/rc.d/rc.local and append the following line:
/usr/local/redis/bin/redis-server /usr/local/redis/redis.conf
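On CentOS 7 rc.local is not executable by default, so also run:

chmod +x /etc/rc.d/rc.local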

Benchmarking

./src/redis-benchmark -h 127.0.0.1 -r 1000000 -n 2000000 -t get,set,lpush,lpop -q # without pipelining
SET: 65817.62 requests per second
GET: 64865.57 requests per second
LPUSH: 68934.62 requests per second
LPOP: 68728.52 requests per second

./src/redis-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -q
./src/redis-benchmark -h 127.0.0.1 -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q # with pipelining
SET: 519615.50 requests per second
GET: 621697.25 requests per second
LPUSH: 605510.12 requests per second
LPOP: 602591.12 requests per second

./src/redis-benchmark --help # list all options

Pipelining is much faster:
A Request/Response server can be implemented so that it is able to process new requests even if the client hasn’t already read the old responses. This way it is possible to send multiple commands to the server without waiting for the replies at all, and finally read the replies in a single step.

While the client sends commands using pipelining, the server will be forced to queue the replies, using memory. So if you need to send a lot of commands with pipelining, it is better to send them as batches each containing a reasonable number, for instance 10k commands, read the replies, and then send another 10k commands again, and so forth. The speed will be nearly the same, but the additional memory used will be at max the amount needed to queue the replies for these 10k commands.

Pipelining is not just a way to reduce the latency cost associated with the round trip time, it actually greatly improves the number of operations you can perform per second in a given Redis server. This is the result of the fact that, without using pipelining, serving each command is very cheap from the point of view of accessing the data structures and producing the reply, but it is very costly from the point of view of doing the socket I/O. This involves calling the read() and write() syscall, that means going from user land to kernel land. The context switch is a huge speed penalty.

When pipelining is used, many commands are usually read with a single read() system call, and multiple replies are delivered with a single write() system call. Because of this, the number of total queries performed per second initially increases almost linearly with longer pipelines, and eventually reaches 10 times the baseline obtained without pipelining.

After all if both the Redis process and the benchmark are running in the same box, isn’t this just copying messages in memory from one place to another without any actual latency or networking involved?

The reason is that processes in a system are not always running, actually it is the kernel scheduler that let the process run, so what happens is that, for instance, the benchmark is allowed to run, reads the reply from the Redis server (related to the last command executed), and writes a new command. The command is now in the loopback interface buffer, but in order to be read by the server, the kernel should schedule the server process (currently blocked in a system call) to run, and so forth. So in practical terms the loopback interface still involves network-like latency, because of how the kernel scheduler works.

Basically a busy loop benchmark is the silliest thing that can be done when metering performances in a networked server. The wise thing is just avoiding benchmarking in this way.

The loopback interface is a virtual interface. Pinging 127.0.0.1 on RHEL 6.5 takes roughly 0.012 ms (12 µs), so in theory at most about 100,000 round trips per second; pinging another server on the LAN takes about 0.1 ms. On a CentOS 7.5 VM running on the same host, the loopback ping is about 0.02 ms.

Why is the loopback interface so fast?
You don’t mention a particular OS but for most all that happens is that the data travels down the stack until it gets to IP at which point it’s pretty much sent back. That’s a massive oversimplification but means that the entire process is usually CPU bound so its performance is therefore directly linked to CPU speed plus stack efficiency. In practical terms modern CPUs and OSs should be able to ‘bounce’ loopback traffic considerably faster than 40Gbps - which is the fastest NIC I think I’m capable of buying today. Hope this helps.
All 127.xx.xx.xx traffic never hits the physical network, it gets processed by a loop back adapter in the kernel.
Loopback test: on my Windows 10 machine a loopback ping is about 0.1 ms, i.e. 100 µs per ping, roughly 10× slower than Linux. However, the numbers Microsoft publishes for Windows Server 2012 with RIO polling ("Yes 3 333,000") work out to about 3 µs per round trip at roughly 333,000 packets per second. "On this particular machine, a desktop class 6-core AMD 3.2 GHz processor, the time stamp returned from QueryPerformanceCounter has a resolution of 319 nanoseconds and is derived from the CPU timestamp counter (TSC) register" (which means just reading a timestamp costs about 319 ns).
Pipelining: commands are sent in batches, which reduces the number of user/kernel mode switches for I/O. You no longer call write() once per command, and you do not wait for the previous reply before sending the next command, which is why it is so much faster.
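A quick way to try batching from the shell is redis-cli's pipe mode; the sketch below sends 10,000 SET commands in one pass (the key names are made up):

for i in $(seq 1 10000); do echo "SET key:$i value:$i"; done | redis-cli --pipe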
To turn this off: sysctl -w net.ipv4.tcp_low_latency=0, or echo 0 > /proc/sys/net/ipv4/tcp_low_latency, or set it in your program; it is off by default. To check the current value: cat /proc/sys/net/ipv4/tcp_low_latency. (Strictly speaking, tcp_low_latency is a kernel-wide knob; TCP_NODELAY itself is a per-socket option set via setsockopt.)

Thread safety

Redis essentially uses thread confinement: all commands are executed by a single thread, which naturally avoids thread-safety problems inside Redis itself. However, compound operations that span multiple Redis commands still need their own locking, possibly a distributed lock.
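A minimal sketch of such a distributed lock using plain redis-cli (the key name and token are made up; real code should generate a unique token per lock holder):

# acquire: SET ... NX EX only succeeds if the key does not exist yet; the TTL guards against crashed holders
redis-cli SET lock:order:1001 my-token NX EX 30
# release: delete the lock only if we still own it, atomically via a small Lua script
redis-cli EVAL "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end" 1 lock:order:1001 my-token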

1. Why is Redis so fast?

There are several reasons:
a. In-memory operation: all of Redis's data lives in memory, so every operation is a memory-level operation, which is why performance is so high.
b. Simple data structures: Redis's data structures are simple and purpose-built, and lookups and updates on them are mostly O(1).
c. I/O multiplexing and non-blocking I/O: Redis uses I/O multiplexing to watch many client sockets at once, so a single thread can serve many connections. This reduces the overhead of thread switching and avoids blocking on I/O, which greatly improves performance.
d. No context switching: the single-threaded model avoids unnecessary context switches and contention between threads, saving the time and overhead of thread switching, and a single thread cannot deadlock.
Official benchmark results show that single-threaded Redis can reach a throughput of about 100,000 requests per second.

Multi-threading in Redis 6.0

The single-threaded model has clear advantages: it keeps Redis's internal implementation simple, lets every operation run without locks, and avoids deadlocks and the time and performance cost of thread switching. Its obvious downside is that the QPS (queries per second) of one Redis instance is hard to push much higher (fast as it already is, one can always aim higher).
Redis 4.0 did introduce some multi-threading, but only for asynchronous deletion of large amounts of data; it does little for anything other than deletes.
Multi-threading lets Redis spread the cost of synchronous read/write I/O across threads and make full use of multiple CPU cores, which can noticeably raise QPS. Although Redis uses I/O multiplexing on non-blocking sockets, the reads and writes themselves still block: for example, when a socket has data, Redis first copies it from kernel space to user space before processing it; this copy blocks, takes longer the more data there is, and used to happen entirely on the single main thread.
Redis 6.0 therefore added multi-threaded I/O to improve read/write performance. The idea is to hand the main thread's socket read/write work to a group of independent I/O threads, so reads and writes on multiple sockets run in parallel, while the commands themselves are still executed serially by the main thread.
Note: multi-threading is disabled by default in Redis 6.0. You enable threaded reads with io-threads-do-reads yes in redis.conf, and that alone is not enough: you also have to set the number of threads, e.g. io-threads 4 to run 4 I/O threads.
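In redis.conf that is (4 threads is only an example value):

io-threads 4
io-threads-do-reads yes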

CPU instructions

The number of clock cycles a processor instruction needs is not fixed; with pipelining it is typically around 2–3 cycles. Measuring a command's performance ultimately comes down to how many CPU instructions, and therefore how many clock cycles, it takes.
You can try stepping through Linux's ping command with gdb to get a feel for this.

In a computer, the clock cycle is the reciprocal of the clock frequency; in one clock cycle the CPU performs only one elementary action. The time to complete one basic operation is a machine cycle, usually made up of several clock cycles; the time to complete one instruction is an instruction cycle, usually made up of several machine cycles, and different instructions need different numbers of machine cycles.
Take 3.2 GHz as an example, i.e. 3.2 × 10^9 clock cycles per second. Assume a machine cycle is two clock cycles and an instruction takes three machine cycles on average (an online seconds-to-nanoseconds converter helps with the arithmetic).
Clock cycle: 1 / (3.2 × 10^9) ≈ 0.31 ns; machine cycle: 0.31 × 2 = 0.62 ns.
Average instruction cycle: 3 × 0.62 ns = 1.86 ns.
Average instruction rate: 1 / 1.86 ns ≈ 537 MIPS (million instructions per second), i.e. about 3.2 / 2 / 3 ≈ 0.53 billion instructions per second, or roughly 0.5 instructions per nanosecond. So reading the timestamp at 315 ns costs about 157 instructions; 3 µs is about 1,570 instructions; 12 µs is about 6,280 instructions, i.e. 6,280 × 2 × 3 = 37,680 clock cycles.

Clock cycle → machine cycle → instruction cycle.
A typical instruction cycle contains four machine cycles: fetch, indirect operand fetch, execute, and interrupt handling. Different instruction cycles need different numbers of clock cycles; in each clock cycle the CPU generates timing and control signals that drive the other components to move and process data.
An instruction cycle usually spans several CPU cycles (also called machine cycles). Because a memory access is relatively slow, the CPU cycle is conventionally defined as the shortest time needed to read one instruction word from memory; in other words, the fetch phase of an instruction takes one CPU cycle. A CPU cycle in turn contains several clock cycles (beats or T-cycles, the most basic unit of processing), and their sum defines the width of one CPU cycle. If an instruction took 5 clock cycles and a 500 MHz processor ran instructions strictly serially, it could only fetch 100,000,000 instructions per second. Processors therefore use pipelining: an instruction is split into stages handled by different functional units that work in parallel. A simple pipeline has fetch, decode, and execute stages; that does not mean an instruction finishes in just 3 clock cycles, because the execute stage does more work and may need several cycles, while fetch and decode usually take one cycle each. So although a single instruction still needs multiple clock cycles to complete, overall a new instruction starts being fetched every clock cycle; a 500 MHz processor then fetches 500,000,000 instructions per second. Different processors have different pipeline depths; three-, five-, and seven-stage designs are common, and adding stages while simplifying the logic in each stage raises processor performance.

context switch

In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point. This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multitasking operating system.The precise meaning of the phrase “context switch” varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed.
A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.

Cost

Context switches are usually computationally intensive, and much of the design of operating systems is to optimize the use of context switches. Switching from one process to another requires a certain amount of time for doing the administration – saving and loading registers and memory maps, updating various tables and lists, etc. What is actually involved in a context switch depends on the architectures, operating systems, and the number of resources shared (threads that belong to the same process share many resources compared to unrelated non-cooperating processes). For example, in the Linux kernel, context switching involves switching registers, stack pointer (it's typically the stack-pointer register), program counter, flushing the translation lookaside buffer (TLB) and loading the page table of the next process to run (unless the old process shares the memory with the new).[2][3] Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call.

User and kernel mode switching

When the system transitions between user mode and kernel mode, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.

Example

Consider a general arithmetic addition operation A = B+1. The instruction is stored in the instruction register and the program counter is incremented. A and B are read from memory and are stored in registers R1, R2 respectively. In this case, B+1 is calculated and written in R1 as the final answer. Because this operation involves only sequential reads and writes, with no waiting on function calls, no context switch/wait takes place in this case.

However, certain special instructions require system calls that require context switch to wait/sleep processes. A system call handler is used for context switch to kernel mode. A display(data x) function may require data x from the Disk and a device driver in kernel mode, hence the display() function goes to sleep and waits on the READ operation to get the value of x from the disk, causing the program to wait and a wait for function call to the released setting the current statement to go to sleep and wait for the syscall to wake it up. To maintain concurrency however the program needs to re-execute the new value and the sleeping process together again.

Performance

Context switching itself has a cost in performance, due to running the task scheduler, TLB flushes, and indirectly due to sharing the CPU cache between multiple tasks.[4] Switching between threads of a single process can be faster than between two separate processes, because threads share the same virtual memory maps, so a TLB flush is not necessary.[5]
According to a benchmark cited in the references, "How long does it take to make a context switch?", the cost of a bare system call, i.e. just a user/kernel mode switch, is roughly:
syscalls
Intel 5150: 105ns/syscall
Intel X5550: 52ns/syscall (that is only a few dozen instructions' worth of time; more complex system calls, or a slower CPU, take longer)
with futex
Intel 5150: ~4300ns/context switch
Intel X5550: ~3000ns/context switch
with CPU affinity (multi-core)
Intel 5150: ~1900ns/process context switch, ~1700ns/thread context switch
Intel X5550: ~1300ns/process context switch, ~1100ns/thread context switch

Switching between user mode and kernel mode costs time, and so does copying data between user space and kernel space.

Miscellaneous

Installing GCC

  1. Download the rpm packages (gcc 4.8.2) from https://vault.centos.org/7.0.1406/os/x86_64/Packages/:
    mpfr-3.1.1-4.el7.x86_64.rpm
    libmpc-1.0.1-3.el7.x86_64.rpm
    kernel-headers-3.10.0-123.el7.x86_64.rpm
    glibc-utils-2.17-55.el7.x86_64.rpm
    glibc-static-2.17-55.el7.x86_64.rpm
    glibc-headers-2.17-55.el7.x86_64.rpm
    glibc-common-2.17-55.el7.x86_64.rpm
    glibc-devel-2.17-55.el7.x86_64.rpm
    glibc-2.17-55.el7.x86_64.rpm
    cpp-4.8.2-16.el7.x86_64.rpm
    gcc-4.8.2-16.el7.x86_64.rpm
  2. rpm -Uvh *.rpm --nodeps --force
  3. Verify with gcc -v

make may still fail with: zmalloc.h:50:31: fatal error: jemalloc/jemalloc.h: No such file or directory #include <jemalloc/jemalloc.h>
Workaround: make MALLOC=libc

Security considerations

As with MySQL, you can create a login-disabled account dedicated to running the Redis service. Set its shell to /sbin/nologin or /bin/false. /bin/false is the stricter option: the account cannot use any service at all, whereas /sbin/nologin only forbids logging in to the system.

usermod -s /sbin/nologin new_username # change an existing user
useradd -s /sbin/nologin new_username # create a new user
groupadd apache
useradd -d /usr/local/apache -g apache -s /bin/false apache # without -g, a group with the same name is created by default
id apache
cat /etc/passwd
cat /etc/group

Run as a systemd service

  1. Create the file /etc/systemd/system/redis.service:
    [Unit]
    Description=redis-server
    After=network.target

    [Service]
    User=redis
    Group=redis
    Type=forking
    LimitCORE=infinity
    LimitNOFILE=100000
    LimitNPROC=100000
    ExecStart=/usr/local/redis/bin/redis-server /usr/local/redis/bin/redis.conf
    PrivateTmp=true

    [Install]
    WantedBy=multi-user.target

groupadd redis
useradd -d /usr/local/redis -g redis -s /bin/false redis
chown redis:redis /usr/local/redis -R
systemctl daemon-reload
systemctl enable redis
systemctl start redis
systemctl status redis

Change pidfile, logfile, and dir in the config file and grant permissions; the directories must be created in advance. Put the pid file in /var/run as redis.pid and the log in /var/log/redis as redis.log. Do not set PIDFile in the service unit, or you may run into problems,
such as a "pid not readable (yet?) after start" error in /var/log/messages.

Finding the pid, and the program's absolute path from the pid

Redis's default pid file is /var/run/redis.pid; it is created when the service starts and deleted automatically when it stops.

netstat -nlp
cd /proc/<pid> # e.g. cd /proc/12332
ll # the exe entry is a symlink to the binary's absolute path

find / -name '*.pid' # locate pid files

Persistence

RDB snapshots

RDB persistence produces point-in-time snapshots of the dataset at configurable intervals.

Since all the data already sits in memory, the simplest approach is to walk through it and write everything out to a file. If the server crashes, data produced since the last snapshot has not been backed up and is lost.

dbfilename dump.rdb; several save conditions can be combined, and a snapshot is taken as soon as any one of them is satisfied.
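For example, the stock redis.conf ships with these three conditions:

save 900 1      # after 900 s if at least 1 key changed
save 300 10     # after 300 s if at least 10 keys changed
save 60 10000   # after 60 s if at least 10000 keys changed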

For the same dataset, the AOF file is usually larger than the RDB file.

Redis also lets us use both methods at once; after a restart it recovers the data from the AOF, since the AOF loses less data than the RDB.

Redis can use AOF persistence and RDB persistence at the same time. In that case, when Redis restarts it prefers the AOF file to rebuild the dataset, because the dataset in the AOF file is usually more complete than the one saved in the RDB file. You can even turn persistence off entirely so that data only exists while the server is running.

AOF (incremental append-only log)

Record every write command that gets executed into a dedicated file; this persistence method is called AOF (Append Only File).

AOF persistence logs every write command the server executes and replays those commands at startup to rebuild the dataset. The commands are stored in the AOF file in the Redis protocol format, and new commands are appended to the end of the file. Redis can also rewrite the AOF in the background so that the file never grows much beyond the size actually needed to represent the current dataset.
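The relevant redis.conf directives look like this; everysec is the usual middle ground between always and no:

appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec   # fsync once per second: lose at most about one second of writes on a crash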

In general, if you want data safety comparable to PostgreSQL, you should use both persistence methods together. If you care about your data but can still tolerate losing a few minutes of it, RDB alone is enough. Many users run AOF only, but that is not recommended: periodic RDB snapshots are very convenient for backing up the database, and restoring a dataset from an RDB file is faster than replaying an AOF.

Optimization

Do not use KEYS * to look up keys. One option is to keep a separate set that holds the keys.
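If keys do have to be enumerated, the non-blocking SCAN iterator is the safer tool (the pattern below is just an example):

redis-cli --scan --pattern 'user:*'        # iterates with SCAN under the hood
redis-cli SCAN 0 MATCH 'user:*' COUNT 100  # or drive the cursor yourself, repeating until it returns 0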

Other issues

WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add ‘vm.overcommit_memory = 1’ to /etc/sysctl.conf and then reboot or run the command ‘sysctl vm.overcommit_memory=1’ for this to take effect.
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command ‘echo madvise > /sys/kernel/mm/transparent_hugepage/enabled’ as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to ‘madvise’ or ‘never’).

Fix these as the messages suggest. echo 511 > /proc/sys/net/core/somaxconn takes effect immediately; to make it permanent:

vi /etc/sysctl.conf # edit
net.core.somaxconn=511 # add this line

sysctl -p # apply
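The other two warnings can be handled the same way; the commands come straight from the warning text (add vm.overcommit_memory = 1 to /etc/sysctl.conf and the THP line to /etc/rc.local to survive reboots):

sysctl vm.overcommit_memory=1
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled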

Failed opening the RDB file dump.rdb (in server root dir /) for saving: Permission denied
This is a permissions problem: explicitly configure a working directory the Redis user is allowed to write to.

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /usr/local/redis/

WARNING Your kernel has a bug that could lead to data corruption during background save. Please upgrade to the latest stable kernel.
Uncomment ignore-warnings ARM64-COW-BUG at the end of the config file.

Usage notes

One approach is a dedicated scheduled job that maintains the cache, combined with key TTLs: if the cache is refreshed periodically, give every key a TTL longer than the refresh interval and write all data with plain puts. Rows deleted from the source will simply never be refreshed again and will expire on their own, so you never have to figure out which entries were removed. For strict freshness, every create/update/delete should of course go straight to Redis as well. This resembles Tomcat's session mechanism.
Alternatively, populate the cache lazily on access with a TTL and fall back to the database on a miss; you can also add an "apply now" button that simply deletes the affected entries as if they had expired.

Naming: use : as the separator and partition keys by business function.

Sorted sets keep their elements ordered, and elements can be removed by rank or by score.

Typical read path: read the cache first; on a miss, read the database and write the value back to the cache. Set a TTL, say 5 minutes, so the value must be reloaded from the database after 5 minutes, and do not refresh the TTL on reads, otherwise a changed database row could stay stale in the cache indefinitely. You can also drop the TTL entirely and instead push updates into the cache whenever the database changes. Cache "no result" values as well, to avoid cache penetration.
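In raw commands the pattern looks roughly like this (key names and values are made up):

redis-cli SET user:1001:name "Alice" EX 300   # cache the database result for 5 minutes
redis-cli GET user:1001:name                  # later reads hit the cache and do not touch the TTL
redis-cli SET user:9999:name "" EX 60         # cache an empty result briefly to block cache penetration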

Exceptions

io.lettuce.core.RedisCommandTimeoutException

Fixing Spring Boot Redis connection timeouts: Spring Boot 2.x uses Lettuce as the default client/connection pool, and Lettuce does not send keep-alive "heartbeats", so idle connections get dropped and time out; switching to Jedis is one fix.
I haven't studied the code in detail, but this exception is raised by the client itself, either because a connection could not be obtained from the pool within the configured time or because an operation did not complete in time. The connection-pool snippet below (this particular one happens to be from HikariCP, but the pattern is the same) shows how such a timeout is typically produced:

/**
 * Create a timeout exception (specifically, {@link SQLTransientConnectionException}) to be thrown, because a
 * timeout occurred when trying to acquire a Connection from the pool. If there was an underlying cause for the
 * timeout, e.g. a SQLException thrown by the driver while trying to create a new Connection, then use the
 * SQL State from that exception as our own and additionally set that exception as the "next" SQLException inside
 * of our exception.
 *
 * As a side-effect, log the timeout failure at DEBUG, and record the timeout failure in the metrics tracker.
 *
 * @param startTime the start time (timestamp) of the acquisition attempt
 * @return a SQLException to be thrown from {@link #getConnection()}
 */
private SQLException createTimeoutException(long startTime)
{
}

/**
 * Get a connection from the pool, or timeout after the specified number of milliseconds.
 *
 * @param hardTimeout the maximum time to wait for a connection from the pool
 * @return a java.sql.Connection instance
 * @throws SQLException thrown if a timeout occurs trying to obtain a connection
 */
public Connection getConnection(final long hardTimeout) throws SQLException
{
   suspendResumeLock.acquire();
   final long startTime = currentTime();

   try {
      long timeout = hardTimeout;
      do {
         PoolEntry poolEntry = connectionBag.borrow(timeout, MILLISECONDS);
         if (poolEntry == null) {
            break; // We timed out... break and throw exception
         }

         final long now = currentTime();
         if (poolEntry.isMarkedEvicted() || (elapsedMillis(poolEntry.lastAccessed, now) > aliveBypassWindowMs && !isConnectionAlive(poolEntry.connection))) {
            closeConnection(poolEntry, poolEntry.isMarkedEvicted() ? EVICTED_CONNECTION_MESSAGE : DEAD_CONNECTION_MESSAGE);
            timeout = hardTimeout - elapsedMillis(startTime);
         }
         else {
            metricsTracker.recordBorrowStats(poolEntry, startTime);
            return poolEntry.createProxyConnection(leakTaskFactory.schedule(poolEntry), now);
         }
      } while (timeout > 0L);

      metricsTracker.recordBorrowTimeoutStats(startTime);
      throw createTimeoutException(startTime);
   }
   catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new SQLException(poolName + " - Interrupted during connection acquisition", e);
   }
   finally {
      suspendResumeLock.release();
   }
}

References

Reading the official documentation is very important; a lot of things have already been taken into account there.