
Redis Installation Notes (Windows and Linux)





Installing Redis on Windows

Download the Redis Windows build from: http://code.google.com/p/servicestack/wiki/RedisWindowsDownload#Download_32bit_Cygwin_builds_for_Windows

I chose the latest Redis build; see the figure below:



After unpacking the archive you get the following files (see the figure below):



redis-server.exe: the server program

redis-cli.exe: the command-line client (used below)

redis-check-dump.exe: checks the local dump (RDB) file

redis-check-aof.exe: checks the append-only update log

redis-benchmark.exe: performance test tool that simulates N clients concurrently issuing M SET/GET queries (similar to Apache's ab tool)

After extracting the Redis files to the root of the E:\ drive, you also need to add a redis.conf configuration file in the Redis root directory. The full file is in the attachment, but I'll paste its contents here as well:

redis.conf:



# Redis configuration file example

# By default Redis does not run as a daemon. Use 'yes' if you need it.

# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.

daemonize no

# When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.

# You can specify a custom pid file location here.

pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379

port 6379

# If you want you can bind a single interface, if the bind option is not

# specified all the interfaces will listen for connections.

#

# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)

timeout 300

# Set server verbosity to 'debug'

# it can be one of:

# debug (a lot of information, useful for development/testing)

# notice (moderately verbose, what you want in production probably)

# warning (only very important / critical messages are logged)

loglevel debug

# Specify the log file name. Also 'stdout' can be used to force

# the daemon to log on the standard output. Note that if you use standard

# output for logging but daemonize, logs will be sent to /dev/null

logfile stdout

# Set the number of databases. The default database is DB 0, you can select

# a different one on a per-connection basis using SELECT <dbid> where

# dbid is a number between 0 and 'databases'-1

databases 16

################################ SNAPSHOTTING #################################

#

# Save the DB on disk:

#

# save <seconds> <changes>

#

# Will save the DB if both the given number of seconds and the given

# number of write operations against the DB occurred.

#

# In the example below the behaviour will be to save:

# after 900 sec (15 min) if at least 1 key changed

# after 300 sec (5 min) if at least 10 keys changed

# after 60 sec if at least 10000 keys changed

save 900 1

save 300 10

save 60 10000

# Compress string objects using LZF when dump .rdb databases?

# For default that's set to 'yes' as it's almost always a win.

# If you want to save some CPU in the saving child set it to 'no' but

# the dataset will likely be bigger if you have compressible values or keys.

rdbcompression yes

# The filename where to dump the DB

dbfilename dump.rdb

# For default save/load DB in/from the working directory

# Note that you must specify a directory not a file name.

dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of

# another Redis server. Note that the configuration is local to the slave

# so for example it is possible to configure the slave to save the DB with a

# different interval, or to listen to another port, and so on.

#

# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration

# directive below) it is possible to tell the slave to authenticate before

# starting the replication synchronization process, otherwise the master will

# refuse the slave request.

#

# masterauth <master-password>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other

# commands. This might be useful in environments in which you do not trust

# others with access to the host running redis-server.

#

# This should stay commented out for backward compatibility and because most

# people do not need auth (e.g. they run their own servers).

#

# requirepass foobared

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there

# is no limit, and it's up to the number of file descriptors the Redis process

# is able to open. The special value '0' means no limits.

# Once the limit is reached Redis will close all the new connections sending

# an error 'max number of clients reached'.

#

# maxclients 128

# Don't use more memory than the specified amount of bytes.

# When the memory limit is reached Redis will try to remove keys with an

# EXPIRE set. It will try to start freeing keys that are going to expire

# in little time and preserve keys with a longer time to live.

# Redis will also try to remove objects from free lists if possible.

#

# If all this fails, Redis will start to reply with errors to commands

# that will use more memory, like SET, LPUSH, and so on, and will continue

# to reply to most read-only commands like GET.

#

# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a

# 'state' server or cache, not as a real DB. When Redis is used as a real

# database the memory usage will grow over the weeks, it will be obvious if

# it is going to use too much memory in the long run, and you'll have the time

# to upgrade. With maxmemory after the limit is reached you'll start to get

# errors for write operations, and this may even lead to DB inconsistency.

#

# maxmemory <bytes>

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live

# with the idea that the latest records will be lost if something like a crash

# happens this is the preferred way to run Redis. If instead you care a lot

# about your data and don't want that a single record can get lost you should

# enable the append only mode: when this mode is enabled Redis will append

# every write operation received in the file appendonly.log. This file will

# be read on startup in order to rebuild the full dataset in memory.

#

# Note that you can have both the async dumps and the append only file if you

# like (you have to comment the "save" statements above to disable the dumps).

# Still if append only mode is enabled Redis will load the data from the

# log file at startup ignoring the dump.rdb file.

#

# The name of the append only file is "appendonly.log"

#

# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append

# log file in background when it gets too big.

appendonly no

# The fsync() call tells the Operating System to actually write data on disk

# instead to wait for more data in the output buffer. Some OS will really flush

# data on disk, some other OS will just try to do it ASAP.

#

# Redis supports three different modes:

#

# no: don't fsync, just let the OS flush the data when it wants. Faster.

# always: fsync after every write to the append only log . Slow, Safest.

# everysec: fsync only if one second passed since the last fsync. Compromise.

#

# The default is "always" that's the safer of the options. It's up to you to

# understand if you can relax this to "everysec" that will fsync every second

# or to "no" that will let the operating system flush the output buffer when

# it want, for better performances (but if you can live with the idea of

# some data loss consider the default persistence mode that's snapshotting).

appendfsync always

# appendfsync everysec

# appendfsync no

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a

# single TCP packet. Uses a bit more CPU but most of the times it is a win

# in terms of number of queries per second. Use 'yes' if unsure.

glueoutputbuf yes

# Use object sharing. Can save a lot of memory if you have many common

# string in your dataset, but performs lookups against the shared objects

# pool so it uses more CPU and can be a bit slower. Usually it's a good

# idea.

#

# When object sharing is enabled (shareobjects yes) you can use

# shareobjectspoolsize to control the size of the pool used in order to try

# object sharing. A bigger pool size will lead to better sharing capabilities.

# In general you want this value to be at least the double of the number of

# very common strings you have in your dataset.

#

# WARNING: object sharing is experimental, don't enable this feature

# in production before of Redis 1.0-stable. Still please try this feature in

# your development environment so that we can test it better.

# shareobjects no

# shareobjectspoolsize 1024

Extract redis_conf.rar from the attachment into the Redis root directory and you're done: Redis is installed. Now let's start using it.

Start Redis:

Run: redis-server.exe redis.conf

After starting, you should see something like the figure below:



Keep the command window that started the server open; closing it shuts down the Redis service.

With the server running, open another window for the client.

Run: redis-cli.exe -h 202.117.16.133 -p 6379

After connecting you should see something like the figure below:



Now you can start playing with it.

Set a key and get its value back:

Shell:



$ ./redis-cli set mykey somevalue

OK

$ ./redis-cli get mykey

somevalue

Adding values to a list:

Shell:



$ ./redis-cli lpush mylist firstvalue

OK

$ ./redis-cli lpush mylist secondvalue

OK

$ ./redis-cli lpush mylist thirdvalue

OK

$ ./redis-cli lrange mylist 0 -1

1. thirdvalue

2. secondvalue

3. firstvalue

$ ./redis-cli rpop mylist

firstvalue

$ ./redis-cli lrange mylist 0 -1

1. thirdvalue

2. secondvalue
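To see how many elements remain after the RPOP, you can also use LLEN (a small sketch; older redis-cli versions print the bare number, newer ones print it as "(integer) 2"):

$ ./redis-cli llen mylist

2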

redis-benchmark.exe: the performance test tool, simulating N clients concurrently issuing M SET/GET queries (similar to Apache's ab tool):

Shell:



./redis-benchmark -n 100000 -c 50

====== SET ======

100007 requests completed in 0.88 seconds

50 parallel clients

3 bytes payload

keep alive: 1

58.50% <= 0 milliseconds

99.17% <= 1 milliseconds

99.58% <= 2 milliseconds

99.85% <= 3 milliseconds

99.90% <= 6 milliseconds

100.00% <= 9 milliseconds

114293.71 requests per second

In my tests, the limit on concurrent benchmark clients under Windows was 60.
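So on Windows you would cap the client count, for example (a sketch using the same flags as above; host and port are the defaults):

redis-benchmark.exe -h 127.0.0.1 -p 6379 -n 100000 -c 60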

========================================================================

Installing Redis on Linux:

Shell:



wget http://redis.googlecode.com/files/redis-2.0.0-rc4.tar.gz

tar xvzf redis-2.0.0-rc4.tar.gz

cd redis-2.0.0-rc4

make

mkdir /home/redis

cp redis-server /home/redis

cp redis-benchmark /home/redis

cp redis-cli /home/redis

cp redis.conf /home/redis

cd /home/redis
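Note: the copy commands above assume a Redis 2.0-era source tree, where the binaries are built in the top-level directory. If you build a newer release, the binaries land in src/, so the copy steps become (a sketch; redis.conf still sits in the top-level directory):

cp src/redis-server /home/redis

cp src/redis-benchmark /home/redis

cp src/redis-cli /home/redis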

You may need sudo during installation, and on a freshly installed Red Hat VM a new user may not be allowed to use sudo yet, so edit /etc/sudoers by hand:

Shell:



cd /etc

su root            ## switch to root (enter the root password)

chmod u+w sudoers  ## make sudoers writable

vi sudoers         ## below the line "root ALL=(ALL) ALL", add: <your username> ALL=(ALL) ALL

:wq                ## save and quit vi

chmod u-w sudoers  ## remove write permission again

Start the server:

./redis-server redis.conf

There are two ways to get an interactive session:

1: ./redis-cli

2: telnet 127.0.0.1 6379 (IP followed by the port)
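Either way, a quick PING is enough to confirm the connection (a minimal sketch; the interactive prompt may look slightly different depending on the redis-cli version):

$ ./redis-cli

redis> ping

PONG

redis> quit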

=============================================================

Configuration parameter reference (a consolidated example follows this list):

1. Redis does not run as a daemon by default; change this to yes to run it as a daemon.
daemonize no

2. When running as a daemon, Redis writes its pid to /var/run/redis.pid by default; use pidfile to choose a different location.
pidfile /var/run/redis.pid

3. The port Redis listens on; the default is 6379. The author explained in a blog post why 6379 was chosen: it is what MERZ spells on a phone keypad, after the Italian showgirl Alessia Merz.
port 6379

4. The host address to bind to.
bind 127.0.0.1

5. Close a connection after a client has been idle for this many seconds; 0 disables the timeout.
timeout 300

6. The log level. Redis supports four levels: debug, verbose, notice and warning; the default is verbose.
loglevel verbose

7. Where to log. The default is standard output; if Redis runs as a daemon while logfile is still stdout, the logs go to /dev/null.
logfile stdout

8. The number of databases. The default database is DB 0; a connection can switch databases with SELECT <dbid>.
databases 16

9. How often to write the dataset to disk: after <seconds> seconds if at least <changes> write operations occurred. Several conditions can be combined.
save <seconds> <changes>

The default configuration file ships with three conditions:
save 900 1
save 300 10
save 60 10000

They mean: save after 900 seconds (15 minutes) if at least 1 key changed, after 300 seconds (5 minutes) if at least 10 keys changed, and after 60 seconds if at least 10000 keys changed.

10. Whether to compress data when dumping to the local database; the default is yes and Redis uses LZF compression. Turning this off saves some CPU but makes the dump file much larger.
rdbcompression yes

11. The local database file name; the default is dump.rdb.
dbfilename dump.rdb

12. The directory where the local database file is stored.
dir ./

13. When this instance is a slave, the IP address and port of the master. On startup the slave automatically synchronizes its data from the master.
slaveof <masterip> <masterport>

14. If the master is password protected, the password the slave uses to connect to it.
masterauth <master-password>

15. The Redis connection password. If set, clients must authenticate with AUTH <password> when connecting. Disabled by default.
requirepass foobared

16. The maximum number of simultaneous client connections. By default there is no limit (it is bounded by the number of file descriptors the Redis process can open); maxclients 0 also means no limit. Once the limit is reached, Redis closes new connections and returns a "max number of clients reached" error.
maxclients 128

17. The maximum amount of memory Redis may use. Redis loads its data into memory at startup; when the limit is reached it first tries to evict keys that have expired or are about to expire. If memory is still exhausted after that, write operations fail while reads continue to work. With Redis's newer VM mechanism, keys stay in memory while values can be swapped out.
maxmemory <bytes>

18. Whether to log every update operation (append-only file). By default Redis writes data to disk asynchronously, so with this off, data written since the last snapshot can be lost on a power failure: RDB snapshots only happen according to the save conditions above, so some data exists only in memory for a while. The default is no.
appendonly no

19. The append-only log file name; the default is appendonly.aof.
appendfilename appendonly.aof

20. How often to sync the append-only log. Three values are available:
no: let the operating system flush the data when it wants (fast)
always: call fsync() after every update operation (slow, safe)
everysec: sync once per second (a compromise, and the default)
appendfsync everysec

21. Whether to enable the virtual memory mechanism; the default is no. Briefly, the VM mechanism stores data in pages and Redis swaps rarely accessed (cold) pages out to disk, while frequently accessed pages are brought back from disk into memory (I will analyze Redis's VM mechanism in detail in a later article).
vm-enabled no

22. The swap file path; the default is /tmp/redis.swap. It must not be shared by multiple Redis instances.
vm-swap-file /tmp/redis.swap

23. Data above vm-max-memory goes to the swap file. Note that all index data (the keys) is always kept in memory no matter how small vm-max-memory is, so when vm-max-memory is 0, all values live on disk. The default is 0.
vm-max-memory 0

24. The Redis swap file is split into pages; an object can span multiple pages, but a page cannot be shared by multiple objects. vm-page-size should match the size of your stored data: the author suggests 32 or 64 bytes for many small objects and a larger page size for large objects; use the default if unsure.
vm-page-size 32

25. The number of pages in the swap file. The page table (a bitmap marking pages as free or in use) is kept in memory and costs 1 byte of memory for every 8 pages on disk.
vm-pages 134217728

26. The number of threads used to access the swap file; it is best not to exceed the number of cores on the machine. With 0, all swap file access is serialized, which can cause long delays. The default is 4.
vm-max-threads 4

27. Whether to glue small replies together into a single packet when answering clients; enabled by default.
glueoutputbuf yes

28. Use a special (zipmap) hash encoding as long as the number of entries and the size of the largest element stay below these thresholds.
hash-max-zipmap-entries 64
hash-max-zipmap-value 512

29. Whether to enable incremental rehashing; enabled by default (I will describe Redis's hashing in detail later).
activerehashing yes

30. Include other configuration files. This is useful when several Redis instances on the same host share a common configuration file while each instance also has its own specific file.
include /path/to/local.conf
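Putting the most common of these directives together, a minimal redis.conf for a daemonized instance might look like the sketch below (values are illustrative; the log and data paths are assumptions you should adapt to your environment):

daemonize yes
pidfile /var/run/redis.pid
port 6379
bind 127.0.0.1
timeout 300
loglevel notice
logfile /var/log/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /home/redis
appendonly yes
appendfsync everysec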

Clients can also connect using telnet:

[root@dbcache conf]# telnet 127.0.0.1 6379

Trying 127.0.0.1...

Connected to dbcache (127.0.0.1).

Escape character is '^]'.

set foo 3

bar

+OK

get foo

$3

bar

^]

telnet> quit

Connection closed.

Starting the server and verifying

Start the server:

./redis-server



$redis-server /etc/redis.conf

Check that it started:

$ ps -ef | grep redis



./redis-cli ping

PONG

Set and get a value with the command-line client

redis-cli set mykey somevalue

./redis-cli get mykey

Shut down the service

$ redis-cli shutdown

# shut down the redis-server listening on a specific port

$redis-cli -p 6380 shutdown


Cluster configuration (master-slave)

Putting all your eggs in one basket is risky. First, set up a master with a slave as a backup. Second, if you can add consistent hashing on top, it also provides load balancing.




To configure master-slave replication, you only need to tell the slave the master's IP and port.

Here the master's IP is 192.168.133.139 and its port is 6379; in the slave's redis.conf:

slaveof 192.168.133.139 6379

PS: so that the two Redis servers can reach each other, comment out bind 127.0.0.1.
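In other words, the slave's redis.conf differs from the master's mainly in these lines (a sketch; 192.168.133.139 is the example master above, and masterauth is only needed if the master sets requirepass):

port 6379
# bind 127.0.0.1          <- commented out so other hosts can connect
slaveof 192.168.133.139 6379
# masterauth <master-password>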

Start the master and then the slave:

Master

[7651] 17 Aug 19:08:07 * Server started, Redis version 2.4.16

[7651] 17 Aug 19:08:07 * DB loaded from disk: 0 seconds

[7651] 17 Aug 19:08:07 * The server is now ready to accept connections on port 6379

[7651] 17 Aug 19:08:08 * Slave ask for synchronization

[7651] 17 Aug 19:08:08 * Starting BGSAVE for SYNC

[7651] 17 Aug 19:08:08 * Background saving started by pid 7652

[7652] 17 Aug 19:08:08 * DB saved on disk

[7651] 17 Aug 19:08:08 * Background saving terminated with success

[7651] 17 Aug 19:08:08 * Synchronization with slave succeeded

Slave

[7572] 17 Aug 19:07:39 * Server started, Redis version 2.4.16

[7572] 17 Aug 19:07:39 * DB loaded from disk: 0 seconds

[7572] 17 Aug 19:07:39 * The server is now ready to accept connections on port 6379

[7572] 17 Aug 19:07:39 * Connecting to MASTER...

[7572] 17 Aug 19:08:08 * MASTER <-> SLAVE sync started: SYNC sent

[7572] 17 Aug 19:08:08 * MASTER <-> SLAVE sync: receiving 10 bytes from master

[7572] 17 Aug 19:08:08 * MASTER <-> SLAVE sync: Loading DB in memory

[7572] 17 Aug 19:08:08 * MASTER <-> SLAVE sync: Finished with success

Once you see logs like the above, the master and slave are connected.
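Besides reading the logs, you can confirm the link with the INFO command (a sketch; the exact field names can vary slightly between Redis versions):

./redis-cli -h 192.168.133.139 info | grep role
role:master

./redis-cli -h 192.168.133.140 info | grep -E 'role|master_link_status'
role:slave
master_link_status:up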

A quick test: write on the master, read on the slave.

Write on the master:

telnet 192.168.133.139 6379

Trying 192.168.133.139...

Connected to 192.168.133.139.

Escape character is '^]'.

set name snowolf

+OK

Read on the slave:

telnet 192.168.133.140 6379

Trying 192.168.133.140...

Connected to 192.168.133.140.

Escape character is '^]'.

get name

$7

snowolf
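The same test can be done with redis-cli instead of telnet (a sketch using the -h flag shown earlier; the second command should return snowolf):

./redis-cli -h 192.168.133.139 set name snowolf

./redis-cli -h 192.168.133.140 get name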


Done!


Master-slave backup

Run the following commands on the slave:

Shell:



# take a snapshot of the dataset
redis-cli save

# stop the redis server
redis-cli shutdown

Then copy the .rdb file out of the data directory.
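A minimal sketch of that copy step, assuming the data directory configured above (dir /home/redis, dbfilename dump.rdb) and a hypothetical backup location:

# after redis-cli save has finished
cp /home/redis/dump.rdb /backup/redis/dump-$(date +%Y%m%d).rdb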


Running Redis as a system service

I'm used to starting everything through the service command. That has to do with how I deploy to production: the deployment account is usually only granted the right to run service commands, mainly to keep the system secure.

Based on the Memcached service script I wrote earlier, here is a Redis version.


Create the file and make it executable:

Shell:



touch /etc/init.d/redis-server

chmod +x /etc/init.d/redis-server

Edit /etc/init.d/redis-server and enter the following (note that this script assumes Redis is daemonized, i.e. daemonize yes in /etc/redis/redis.conf, so that the pid file gets written):

Shell:



#!/bin/bash

#

# redis Startup script for redis processes

#

# author: snowolf

#

# processname: redis

redis_path="/usr/local/bin/redis-server"

redis_conf="/etc/redis/redis.conf"

redis_pid="/var/run/redis.pid"

# Source function library.

. /etc/rc.d/init.d/functions

[ -x $redis_path ] || exit 0

RETVAL=0

prog="redis"

# Start daemons.

start() {

if [ -e $redis_pid -a ! -z $redis_pid ];then

echo $prog" already running...."

exit 1

fi

echo -n $"Starting $prog "

# Single instance for all caches

$redis_path $redis_conf

RETVAL=$?

[ $RETVAL -eq 0 ] && {

touch /var/lock/subsys/$prog

success $"$prog"

}

echo

return $RETVAL

}

# Stop daemons.

stop() {

echo -n $"Stopping $prog "

killproc -d 10 $redis_path

RETVAL=$?

echo

[ $RETVAL -eq 0 ] && rm -f $redis_pid /var/lock/subsys/$prog

return $RETVAL

}

# See how we were called.

case "$1" in

start)

start

;;

stop)

stop

;;

status)

status $prog

RETVAL=$?

;;

restart)

stop

start

;;

condrestart)

if test "x`pidof redis-server`" != x; then

stop

start

fi

;;

*)

echo $"Usage: $0 {start|stop|status|restart|condrestart}"

exit 1

esac

exit $RETVAL

Example run:

# service redis-server restart

Stopping redis [FAILED]

Starting redis [ OK ]

# service redis-server status

redis (pid 14965) is running...

Very convenient!
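If you also want the service to start at boot on a RHEL-style system, one approach (a sketch, not part of the original script) is to add a chkconfig header near the top of /etc/init.d/redis-server and then register it:

# add these two comment lines near the top of the script:
# chkconfig: 2345 90 10
# description: Redis server

chkconfig --add redis-server
chkconfig redis-server on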


That's all I've put together for now; to be continued.