
Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

2015-09-18 17:01
https://indico.cern.ch/event/304944/session/3/contribution/402/attachments/578582/796733/CHEP2015_Swift_Ceph_v8.pdf


http://lists.openstack.org/pipermail/openstack/2014-January/004641.html

https://swiftstack.com/blog/2013/04/18/openstack-summit-benchmarking-swift/


>> Hi,
>>
>> This question is specific to OpenStack Swift. I am trying to understand
>> just how much of a bottleneck the proxy server is when multiple clients
>> are concurrently trying to write to a Swift cluster. Has anyone done
>> experiments to measure this? It would be great to see some results.
>>
>> I see that the proxy-server already has a "workers" config option.
>> However, it looks like that is the number of threads in one
>> proxy-server process. Does running multiple proxy-servers on different
>> nodes (with a load balancer in front of them) help in satisfying more
>> concurrent writes? Or will these multiple proxy-servers also get
>> bottlenecked on the account/container/object servers?
>>
>> Also, looking at the code in swift/proxy/controllers/obj.py, it seems
>> that each request the proxy-server sends to the backend servers
>> (account/container/object) is synchronous. It does not send the request
>> and go back to accept more requests. Is this one of the reasons why
>> write requests can be slow?
>>
>> Thanks in advance.

##############################################################################################

It's not synchronous: each request/eventlet coroutine will yield/trampoline
back to the reactor/hub on every socket operation that raises EWOULDBLOCK.
In cases where there's a tight, long-running read/write loop, you'll
normally find a call to eventlet.sleep (or in at least one case a queue) to
avoid starvation.
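
To illustrate that pattern, here is a minimal sketch of a cooperative
read/write loop under eventlet. The function and variable names are
hypothetical, not Swift's actual code (the real logic lives in
swift/proxy/controllers/obj.py):

    import eventlet

    def stream_chunks(src, dst, chunk_size=65536):
        # Socket reads/writes that would block (EWOULDBLOCK) trampoline
        # back to the eventlet hub automatically, letting other
        # greenthreads run while this one waits for I/O.
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            # Explicit yield to the hub so a loop that never blocks
            # (e.g. fast local I/O) doesn't starve other coroutines.
            eventlet.sleep(0)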

Tuning workers and concurrency has a lot to do with the hardware and
somewhat with the workload. The processing rate of an individual proxy
server is mostly CPU bound and depends on whether you're doing SSL
termination in front of your proxy. Request-rate throughput is easily
scaled by adding more proxy servers (assuming your client doesn't
bottleneck; look to https://github.com/swiftstack/ssbench for a decently
scalable Swift benchmark suite). Throughput is harder to scale wide
because of load balancing: round-robin DNS seems to be a good choice, or
ssbench has an option to benchmark against a set of storage URLs (a list
of proxy servers).

Have you read:

http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-tuning
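
For concreteness, here is an illustrative proxy-server.conf fragment
showing the tuning knobs mentioned above; the values are examples only
and should be adjusted per the deployment guide, not taken as
recommendations:

    [DEFAULT]
    bind_port = 8080
    # One worker process per CPU core is a common starting point.
    workers = 8
    # Maximum concurrent requests each worker will service at once.
    max_clients = 1024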

> Hi Shrinand,
>
> The concurrency bottleneck of a Swift cluster can come from several
> places. Here's a list:
>
> - Settings of each worker: worker count, max_clients, threads_per_disk
>   (an illustrative config fragment follows this list)
> - Proxy node CPU saturation
> - Storage node CPU saturation
> - Total disk I/O capacity (including available memory for XFS caching)
> - The power of your client machines
> - Network issues
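
As a concrete example of the per-worker settings in the first item, here
is an illustrative object-server.conf fragment; the values are
placeholders, not recommendations:

    [DEFAULT]
    workers = 4
    max_clients = 1024

    [app:object-server]
    use = egg:swift#object
    # Size of the per-disk threadpool for blocking disk I/O (0 disables
    # it); this keeps one slow disk from stalling the whole worker.
    threads_per_disk = 4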

>
> You need to analyze the monitoring data to find the real bottleneck.
> The achievable range of concurrent connections depends on the
> deployment: from 150 (VMs) to 6K+ (a physical server pool). Of course,
> you can set up multiple proxy servers to handle higher concurrency, as
> long as your storage nodes can stand it.
>
> The path of a request, as far as I know (a minimal client-side sketch
> follows this reply):
>
> Client --> proxy-server --> object-server --> container-server
> (optional async update) --> object-server --> proxy-server --> Client
> --> close connection.
>
> Hope it helps.
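
To make the request path above concrete, here is a minimal client-side
sketch using python-swiftclient; the auth URL and credentials are
placeholders for whatever your cluster uses:

    from swiftclient.client import Connection

    # Placeholder endpoint and tempauth-style credentials, not a real
    # cluster.
    conn = Connection(
        authurl='http://proxy.example.com:8080/auth/v1.0',
        user='test:tester',
        key='testing',
    )

    # PUT: client -> proxy-server -> object-servers (replicas); each
    # object-server then updates the container listing, possibly async.
    conn.put_container('mycontainer')
    conn.put_object('mycontainer', 'hello.txt', contents=b'hello swift')

    # GET returns through the proxy back to the client.
    headers, body = conn.get_object('mycontainer', 'hello.txt')
    conn.close()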