My experiences with gvirstor

From: Patrick Tracanelli <eksffa_at_freebsdbrasil.com.br>
Date: Sun, 29 Apr 2007 16:43:49 -0300
Here are my experiences with gvirstor so far.

With kern.geom.virstor.debug=15 things get really slow. While accessing
(newfs'ing) /dev/virstor/home, the system spends about 98% of its CPU cycles
on it. No real problem, just mentioning it in case this shouldn't happen.
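For reference, the debug level was raised (and later restored) with sysctl; the
variable name is the one mentioned above, and I am assuming 0 is the quiet
default:

# sysctl kern.geom.virstor.debug=15
# sysctl kern.geom.virstor.debug=0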

With a 40GB device I cannot create a 2TB virtual gvirstor device. While
running newfs, at a certain moment I get:

...
...
...
  3542977888, 3543354240, 3543730592, 3544106944, 3544483296, 3544859648,
3545236000, 3545612352, 3545988704, 3546365056,
  3546741408, 3547117760, 3547494112,

and it STOPS. Checking the debug output I can see that BIO delaying is
working, because I get:

GEOM_VIRSTOR[1]: All physical space allocated for home
GEOM_VIRSTOR[2]: Delaying BIO (size=65536) until free physical space can
be found on virstor/home

If I run ./gvirstor add home ad4s1, things start working again. But ad4s1 is
way too small and is not enough. At least I can create a 1TB device, though:

/dev/virstor/home    946G    4.0K    870G     0%    /usr/home4

So my question: how can I do the math to find out how much real space
I will need to create a gvirstor device of size N?
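A rough model I am assuming here (not taken from any gvirstor documentation):
physical space is handed out in fixed-size chunks, and every chunk that newfs
writes into, roughly one per cylinder group plus the superblock copies, gets
allocated in full. If that is right, the up-front physical space for a virtual
device of size N is about

  number_of_cylinder_groups * chunk_size   (plus the allocation table itself)

With made-up numbers, say 5800 cylinder groups on a ~1TB UFS2 filesystem and
4MB chunks:

# echo $((5800 * 4))
23200

i.e. roughly 23GB gone before any real data is written.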

# ./gvirstor status home
         Name             Status  Components
virstor/home  43% physical free  ad2s1

Since it is a 40GB device, something close to 34GB was used to store the
structure of a 1TB device. Is this usage related to the chunk size?
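If this gvirstor build supports the usual geom(8) "list" verb, it should print
the chunk size it is using along with the rest of the metadata (that it does is
my assumption; I have not checked which verbs this version implements):

# ./gvirstor list home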

Anyway, I keep adding and removing disks from the gvirstor device:

         Name             Status  Components
virstor/home  91% physical free  ad2s1
                                  ad0s2d
                                  ggate0
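For the record, the component juggling above is just the add verb shown
earlier, plus what I assume is a matching remove verb for components that have
not had space allocated from them yet (the remove syntax is my guess, following
the usual geom(8) conventions):

# ./gvirstor add home ad0s2d
# ./gvirstor add home ggate0
# ./gvirstor remove home ggate0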

I am also importing ggate devices into gvirstor without problems. It is
amazing that it works, since it theoretically gives us unlimited storage
space. On a single machine we are limited by the number of disks, but with
multiple machines importing ggate exports it scales amazingly.
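Spelling out the lego for one remote disk (addresses and device names below are
placeholders, and I am assuming ggate0 is the unit ggatec ends up creating). On
the box exporting the raw disk (here 10.69.69.2, allowing the gvirstor host
10.69.69.1):

# echo "10.69.69.1/32 RW /dev/ad1s1" >> /etc/gg.exports
# ggated

And on the gvirstor host:

# ggatec create -o rw 10.69.69.2 /dev/ad1s1
# ./gvirstor add home ggate0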

However, if I export the gvirstor device, the other side (the ggate client)
can only import it if it is unmounted on the local machine (the one where
gvirstor resides):

/dev/ggate0    946G    4.0K    870G     0%    /mnt
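For completeness, the remote side got there with something along these lines
(my reconstruction, using the same ggatec syntax shown further down):

# ggatec create -o rw 10.69.69.1 /dev/virstor/home
# mount /dev/ggate0 /mnt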

If I then try to mount it locally on the gvirstor machine, I get:

# mount /dev/virstor/home /usr/home4
mount: /dev/virstor/home: Operation not permitted

And, if I try to export a mounted gvirstor device, ggated starts fine:

# ggated -v
info: Reading exports file (/etc/gg.exports).
debug: Added 10.69.69.69/32 /dev/md0 RW to exports list.
debug: Added 10.69.69.69/32 /dev/virstor/home RW to exports list.
info: Exporting 2 object(s).
info: Listen on port: 3080.
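Judging from the debug lines above, the /etc/gg.exports on the server must look
more or less like this (my reconstruction; see ggated(8) for the exact format):

10.69.69.69/32 RW /dev/md0
10.69.69.69/32 RW /dev/virstor/home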

But on the remote side:

# ggatec create -v -o rw 10.69.69.1 /dev/virstor/home
info: Connected to the server: 10.69.69.1:3080.
debug: Sending version packet.
debug: Sending initial packet.
debug: Receiving initial packet.
debug: Received initial packet.
info: Connected to the server: 10.69.69.1:3080.
debug: Sending version packet.
debug: Sending initial packet.
debug: Receiving initial packet.
debug: Received initial packet.
error: ggatec: ioctl(/dev/ggctl): Invalid argument.

And, back on the server side, ggated:

error: Cannot open /dev/virstor/home: Operation not permitted.
debug: Connection removed [10.69.69.69 /dev/virstor/home].
warning: Cannot send initial packet: Bad file descriptor.

That's no fun :( I thought I could do more Lego play. This seems like the
same problem I had in the past when trying to export a mounted gmirror
device.

It's probably a ggate limitation.
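If it is not a ggate limitation but GEOM's usual refusal to open for writing a
provider that is already open (mounted), the well-known foot-shooting sysctl
might let the export go through. That is only my assumption, and it is unsafe
outside of testing, since nothing would then protect the filesystem from two
writers:

# sysctl kern.geom.debugflags=16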

Anyway, this is again just to report the experiences I have had so far.

Up to now I have not hit any panics or unexpected behavior.

While running this command:

# dd if=/dev/zero of=/usr/home4/40G.bin count=40000 bs=1024k

and watching the disks with

iostat -w1 ad0 ad2

I can see there is no performance difference between writing to the ad
provider directly and writing to the gvirstor provider. I can also see that
the disk usage goes to one provider at a time: I only get activity on ad0
once ad2 has used up its space. gstat shows me the same thing.

However, let me ask something. Is metadata information updated 
synchronously?

I ask because removing /usr/home4/40G.bin (rm /usr/home4/40G.bin)
takes about a minute and a half to finish (newfs was run with the -U flag).
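To put a number on it, and to double-check that soft updates really ended up
enabled on the gvirstor-backed filesystem, something like this should be enough
(mount lists soft-updates among the options when it is on):

# mount | grep home4
# time rm /usr/home4/40G.bin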

I will do some more testing during the week.
Received on Sun Apr 29 2007 - 18:10:36 UTC
