Test our WEDOS Cloud


We want to run the best VPS on the market. We have top-of-the-line HPE Moonshot server cabinets, unrivalled in their category, and in the future we will have two interconnected datacentres where services can run simultaneously. What we need now is your help with testing. We want to hear your views, comments, and ideas. Let’s build the best VPS on the market together!

Want to test the fastest VPS? Want to test WEDOS Cloud?

If you want to be one of the first to test our WEDOS Cloud and VPS SSD 2.0, let us know at
sales@wedos.com. Send us your contact information and a description of the tests you plan to run on the VPS. We are interested in the greatest possible diversity (websites, benchmarks, CMS, TeamSpeak, game servers, anything…). If we like your project/test, we will send you the login and password for your test account. The estimated testing period is 14 days; we will extend it if needed.

You can test anything legal (see our VPS SSD terms and general terms). Test, load, overload, and benchmark. There are 2 chassis at your disposal, with 696 CPU threads at 3.8 GHz, 2.7 TB RAM, and 28 TB of SSD storage.

You can share your results here in the discussion, contact us via private message, or write to us through our contact form.

Of course, if you have a blog, you can publish the results publicly. We are not afraid 😉 The first results already show that we have extremely powerful processors and that disk operations are “breaking records”.

PS: It is important to keep in mind that this is a service designed for testing purposes. We have a few experimental things that are not used anywhere else, so the service is definitely not intended for production use yet.

On which servers does the new service run?

Real photo from our datacenter:

This is the HPE Moonshot system. In our case it is a set of 45 servers, each equipped with an Intel® Xeon® E3-1284L v4 processor, Intel® Iris™ Pro P6300 graphics, 32 GB of ECC-protected memory, 2 x 10Gb Ethernet, and one M.2 flash storage module.

Yes, each server has 2 x 10 Gbps connectivity. There are 45 of these servers, each with fast SSDs, in the entire chassis. The servers will be cooled in an oil bath; take a look at what such a bath looks like here. It is an authentic video again.

First feedback:

Testing of the new WEDOS Cloud continues in full swing. We have the first results here, and we are still fine-tuning and working on everything.


We currently see 171,000 IOPS for reads and almost 11,000 IOPS for writes on the test VPS. The commands used are the same as in another test: https://dzone.com/…/iops-benchmarking-disk-io-aws-vs-digita… , so we can compare ourselves with the world.

We read almost 2x faster than the fastest company listed and 42x faster than the second. Our writes are slightly faster than the test winner’s, and 2.5 times faster than the second-ranked company’s.

We’ll see what business terms we deploy and how things work out technically, but we already know today that the choice of HPE Moonshot servers was the right one. Each server has a powerful high-frequency CPU, 2 x 10Gbps connectivity, and storage that is 100% SSD (M.2 NVMe).

Our results – random reads:

fio -randrepeat=1 -ioengine=libaio -direct=1 -gtod_reduce=1 -name=test -filename=test -bs=4k -iodepth=64 -size=4G -readwrite=randread

test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [r(1)] [71.4% done] [667.8MB/0KB/0KB /s] [171K/0/0 iops] [eta 00m:02s]
Jobs: 1 (f=1): [r(1)] [100.0% done] [664.9MB/0KB/0KB /s] [170K/0/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2232: Wed Apr 26 12:19:15 2017
read : io=4096.0MB, bw=662398KB/s, iops=165599, runt= 6332msec
cpu : usr=14.97%, sys=51.68%, ctx=23969, majf=0, minf=74
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=4096.0MB, aggrb=662397KB/s, minb=662397KB/s, maxb=662397KB/s, mint=6332msec, maxt=6332msec
Disk stats (read/write):
vda: ios=1018220/0, merge=0/0, ticks=297536/0, in_queue=297404, util=98.43%
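As a quick sanity check of our own (not part of the original benchmark), the reported read bandwidth should roughly equal IOPS multiplied by the 4 KB block size:

```shell
# Sanity check: bandwidth ~= IOPS x block size.
# Values are taken from the fio read output above (iops=165599, bs=4k).
iops=165599   # reported read IOPS
bs_kb=4       # block size in KB (bs=4k)
bw_kb=$((iops * bs_kb))
echo "expected bandwidth: ${bw_kb} KB/s"
```

This gives 662,396 KB/s, agreeing with the reported bw=662398KB/s up to rounding in fio's IOPS figure.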


Our results – random writes:

fio -randrepeat=1 -ioengine=libaio -direct=1 -gtod_reduce=1 -name=test -filename=test -bs=4k -iodepth=64 -size=4G -readwrite=randwrite
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/42556KB/0KB /s] [0/10.7K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2178: Wed Apr 26 12:22:46 2017
write: io=4096.0MB, bw=53627KB/s, iops=13406, runt= 78213msec
cpu : usr=2.21%, sys=8.20%, ctx=1049513, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=0/w=1048576/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=53626KB/s, minb=53626KB/s, maxb=53626KB/s, mint=78213msec, maxt=78213msec
Disk stats (read/write):
vda: ios=0/1048311, merge=0/15, ticks=0/5001708, in_queue=5001932, util=99.93%
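The write runtime can be cross-checked the same way (again our arithmetic, not fio's): total I/O divided by bandwidth should give the elapsed time.

```shell
# Sanity check: runtime ~= total IO / bandwidth.
# Values are taken from the fio write output above (io=4096.0MB, bw=53627KB/s).
io_kb=$((4096 * 1024))   # 4096 MB expressed in KB
bw_kb=53627              # reported write bandwidth in KB/s
runtime_s=$((io_kb / bw_kb))
echo "expected runtime: ~${runtime_s} s"
```

This yields about 78 seconds, matching the reported runt=78213msec.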