The Manage Pools section is shown first when accessing the Web-UI. A pool (or bucket pool) is a directory that holds buckets. Each pool corresponds to a single backend S3 server instance. Buckets and access keys are associated with a pool.
The first step is to create a pool. Fill in a directory as a full path, select a unix group, then click the create button (a plus icon). The directory needs to be writable by the user:group pair.
The List Pools section displays a list of existing pools. It is a slider list. Check the "backend-state" of the pool just created; it should be ready. A pool is unusable when it is in the inoperable state (often because the directory is not writable).
Select a pool by clicking the edit button (a pencil icon); it opens the Edit a Pool section. Or delete a pool by clicking the delete button (a trash-can icon).
Edit a Pool section has two independent subsections -- one for buckets and the other for access keys.
A bucket has a bucket-policy that specifies the permission for public access: none, upload, download, or public. A bucket with the none policy is accessible only with access keys.
An access key has a key-policy: readwrite, readonly, or writeonly. Accesses to buckets are restricted by these policies. An expiration date must be in the future. An expiration date is actually a time in seconds, but the UI only handles it as a date at midnight UTC.
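The date-at-midnight-UTC handling can be illustrated with a small Python sketch (the function name is ours, not part of Lens3):

```python
from datetime import datetime, timezone

def expiration_seconds(date_str):
    """Convert a "YYYY-MM-DD" date to epoch seconds at midnight UTC,
    the granularity the Web-UI uses for expiration dates."""
    d = datetime.strptime(date_str, "%Y-%m-%d")
    return int(d.replace(tzinfo=timezone.utc).timestamp())

# Example: 2030-01-01 at midnight UTC.
print(expiration_seconds("2030-01-01"))  # 1893456000
```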
The last figure shows a screenshot after some operations. It has one private bucket and two access keys (one readwrite, one readonly).
The S3-endpoint URL can be found in the menu at the top-left corner.
- Pools, buckets, and access keys expire 180 days after their creation (the period is site configurable).
- A user also expires 180 days after the last access to Registrar. However, the expiration of a pool will come first.
The UI is created with vuejs+vuetify. If it is not to your taste, try the simple UI. The simple UI reveals the interactions with the Web-UI. If you are currently accessing the UI by a URL ending with "⋯/ui/index.html", the simple UI is available at "⋯/ui2/index.html".
The following example shows accessing an endpoint using the AWS CLI. An access key pair can be obtained from the Lens3 Web-UI. Lens3 only works with the signature algorithm v4, which is specified as "s3v4".
$ cat ~/.aws/config
[default]
s3 =
    signature_version = s3v4
$ cat ~/.aws/credentials
[default]
aws_access_key_id = WoRKvRhrdaMNSlkZcJCB
aws_secret_access_key = DzZv57R8wBIuVZdtAkE1uK1HoebLPMzKM6obA4IDqOhaLIBf
$ aws --endpoint-url=http://lens3.example.com/ s3 ls s3://somebucket1/
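Signature v4 (the "s3v4" above) signs each request with a key derived from the secret key, the date, the region, and the service; the AWS CLI does this internally. A minimal stdlib sketch of just the key-derivation step (inputs are illustrative):

```python
import hashlib, hmac

def sigv4_signing_key(secret_key, date, region, service="s3"):
    """Derive the per-day AWS Signature v4 signing key
    (the HMAC-SHA256 chain from the SigV4 specification)."""
    def _hmac(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

key = sigv4_signing_key("DzZv57R8wBIuVZdtAkE1uK1HoebLPMzKM6obA4IDqOhaLIBf",
                        "20250101", "us-east-1")
print(len(key))  # 32 (an HMAC-SHA256 digest)
```

The derived key is then used to sign a canonical form of each request; requests signed with other algorithms are rejected by Lens3.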
First, check the status of a pool as shown in the List Pools section. Next, check error messages from an S3 access. Note, however, that accesses rejected at Lens3 only return coarse error messages.
- A pool becomes INOPERABLE when starting the backend S3 server fails. For diagnosis, the reason button in the UI shows the message from the backend. A typical error is that the pool's bucket-directory is not writable. Unfortunately, the message might not help much in other error cases.
- The existence of regular files in the pool's bucket-directory may cause a problem: creating a bucket of the same name fails in the backend.
Fig. Lens3 overview.
Lens3 consists of Multiplexers and Registrar -- Multiplexer is a proxy to backend S3 servers, and Registrar manages buckets and access keys through the Web-UI. S3-Baby-server is an open-source S3 server. The other components are third-party software. Valkey is an open-source keyval-db system. A reverse-proxy is not included in Lens3, but it is required for operation.
Multiplexer forwards access requests to a backend S3 server instance by looking at the bucket name. Multiplexer determines the target backend using the association of a bucket with a pool. This association is stored in the keyval-db.
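The routing decision can be sketched as a lookup (a toy in-memory stand-in for the keyval-db; all names and endpoints are illustrative):

```python
# Toy stand-in for the keyval-db association of bucket -> pool,
# and of pool -> backend endpoint.
bucket_to_pool = {"somebucket1": "pool-a", "somebucket2": "pool-b"}
pool_to_backend = {"pool-a": "http://localhost:9001",
                   "pool-b": "http://localhost:9002"}

def route(bucket):
    """Pick the backend endpoint for a request by its bucket name."""
    pool = bucket_to_pool.get(bucket)
    return pool_to_backend.get(pool) if pool else None

print(route("somebucket1"))  # http://localhost:9001
```

A request naming an unknown bucket has no target and is rejected at Multiplexer.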
Multiplexer is also in charge of starting and stopping a backend S3 server instance. Multiplexer starts a backend on receiving an access request, and after a while, Multiplexer stops the instance when it becomes idle. Multiplexer runs a backend as a user process using "sudo".
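The start-on-demand, stop-on-idle behavior can be sketched as follows (class name and timeout are illustrative, not Lens3's actual implementation):

```python
import time

class BackendManager:
    """Toy sketch of on-demand start and idle stop of a backend."""
    def __init__(self, idle_timeout=600.0):
        self.idle_timeout = idle_timeout
        self.running = False
        self.last_access = 0.0

    def on_request(self, now=None):
        now = time.time() if now is None else now
        if not self.running:
            self.running = True   # here Lens3 would start the backend via sudo
        self.last_access = now

    def reap_if_idle(self, now=None):
        now = time.time() if now is None else now
        if self.running and now - self.last_access > self.idle_timeout:
            self.running = False  # here Lens3 would stop the backend
```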
Registrar provides management of buckets. A pool is a unit of management in Lens3 and it corresponds to a single backend. A user first creates a pool, then registers buckets to the pool.
A pool has a "backend-state" that reflects the state of its backend. The pool states are:
- READY and INITIAL indicate the service is usable. They do not necessarily mean a backend is running. READY and INITIAL are synonymous in v2.1; formerly, INITIAL indicated that the backend was not yet in sync with Lens3's state.
- DISABLED indicates a pool is unusable. A transition between READY and DISABLED is caused by an administrator's action or by some expiration condition. The causes include disabling a user account, taking a pool offline, or expiry of a pool.
- SUSPENDED indicates a pool is temporarily unusable because the server is busy. It takes several minutes for the condition to cease.
- INOPERABLE indicates an error state in which a pool is permanently unusable. It usually means starting a backend has failed. Such a pool cannot be used and should be removed.
Deletions of buckets and secrets are accepted while a pool is suspended, because they are internal actions in Lens3. In contrast, additions of buckets and secrets are rejected.
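The states and the suspension rule above can be sketched like this (a sketch; the state names are from the list above, the function names are ours):

```python
from enum import Enum

class PoolState(Enum):
    INITIAL = "initial"
    READY = "ready"
    DISABLED = "disabled"
    SUSPENDED = "suspended"
    INOPERABLE = "inoperable"

def pool_usable(state):
    """READY and INITIAL are synonymous in v2.1; both mean usable."""
    return state in (PoolState.READY, PoolState.INITIAL)

def mutation_allowed(state, op):
    """During SUSPENDED, deletions of buckets/secrets are internal to
    Lens3 and accepted, while additions are rejected."""
    if state == PoolState.SUSPENDED:
        return op == "delete"
    return pool_usable(state)

print(pool_usable(PoolState.READY))                      # True
print(mutation_allowed(PoolState.SUSPENDED, "add"))      # False
print(mutation_allowed(PoolState.SUSPENDED, "delete"))   # True
```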
Lens3 assumes buckets are managed only via Registrar, and it rejects some S3 bucket operations. Specifically, bucket creation and listing requests fail because they are not forwarded to a backend. In contrast, bucket deletion is accepted: the S3 delete-bucket operation is forwarded to a backend and will succeed.
Lens3 Registrar never deletes buckets in the backend; it just removes them from its namespace.
On the other hand, a user can delete a bucket via the S3 delete-bucket operation. However, the deletion is not reflected in Lens3, so the bucket will be re-created with empty contents at the next start of a backend. Note that Lens3 tries to keep the existence of buckets in sync with a backend when starting a backend.
Bucket names must consist of lowercase alphanumerics and "-" (minus). In particular, they cannot include "." (dot) or "_" (underscore). Lens3 rejects names consisting entirely of numerals. Lens3 also rejects the names "aws", "amazon", and "minio", and names that begin with "goog" or "g00g".
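These rules could be approximated by a check like the following (a sketch only; the authoritative rules are in Lens3's code):

```python
import re

RESERVED = {"aws", "amazon", "minio"}

def acceptable_bucket_name(name):
    """Lowercase alphanumerics and "-" only; not all numerals;
    no reserved names; no "goog"/"g00g" prefixes."""
    if not re.fullmatch(r"[a-z0-9-]+", name):
        return False
    if name.isdigit():
        return False
    if name in RESERVED:
        return False
    if name.startswith(("goog", "g00g")):
        return False
    return True

print(acceptable_bucket_name("somebucket1"))  # True
print(acceptable_bucket_name("my_bucket"))    # False ("_" is rejected)
```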
Lens3 does not provide control over the properties of files and buckets. Buckets can only have a public access policy.
A running S3-Baby-server may create a file "."+object+"@meta" for each object. It holds metadata for the object, such as a checksum and attached tags.
Lens3 does not provide access logs to users. Administrators might provide access logs to users on request, by filtering server logs.
Lens3 returns responses in JSON, not XML, when a request is handled within Lens3 itself.
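Clients expecting S3's XML errors can tell the two apart by inspecting the response body; a heuristic sketch (the JSON field names in the example are illustrative, not Lens3's actual format):

```python
import json
import xml.etree.ElementTree as ET

def classify_error_body(body):
    """Errors generated by Lens3 itself are JSON; errors passed
    through from the backend S3 server are XML."""
    try:
        json.loads(body)
        return "lens3-json"
    except ValueError:
        pass
    try:
        ET.fromstring(body)
        return "backend-xml"
    except ET.ParseError:
        return "unknown"

print(classify_error_body('{"status": "error", "reason": "bad pool"}'))
print(classify_error_body('<Error><Code>AccessDenied</Code></Error>'))
```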
- Lens3 has no elaborate access policies.
- Lens3 has no event notifications.
- Lens3 does not support listing of buckets by "aws s3 ls". Simply, Lens3 prohibits accesses to the bucket namespace ("/"), because the bucket namespace is shared by all users.
- Lens3 does not support presigned URLs. Lens3 does not recognize a credential parameter in a URL.
- Lens3 does not provide access to the rich UI offered by a backend server.
- Lens3 only keeps track of a single Registrar session (as a CSRF countermeasure). Accesses from multiple browsers are rejected.
- pool: A bucket pool is a management unit of S3 buckets. It corresponds to a single backend.
- backend: A backend refers to a backend S3 server instance. It is a process of S3-Baby-server.
- probe access: Registrar or the administrator tool accesses Multiplexer to start a backend instance. Such access is called a probe access. A probe access is processed at Multiplexer and is not forwarded to a backend.
- The backend is changed to S3-Baby-server. The supported version of MinIO stopped working with recent AWS-CLI (probably due to handling of chunked streams). Note that Lens3 is only tested with S3-Baby-server from version 2.2.
- v2.1 is a code refresh.
- Users of the service are default-allow (configurable). Prior registration of users is optional.
- It has a choice of backend: rclone, in addition to MinIO. Note that the current implementation of rclone's serve s3 (rclone v1.66.0) has problems.
- Checking of access keys is done in Lens3; v1.3 passed requests to a backend unchecked.
- Records in the keyval-db are not compatible with v1.3. All records are now in JSON. The keyval-db is changed to Valkey.
- Commands to add/delete buckets in backends are invoked by Multiplexer. This reverts to the v1.1 behavior.
- The MinIO version is fixed to use the legacy "fs"-mode, which requires a quite old version of MinIO. In recent development, MinIO introduced erasure-coding and uses chunked files in its storage; chunked files are not suitable for exporting existing files.
- Host-style naming of buckets is dropped.
- Rich features are dropped.
- Accesses are forwarded to MinIO based on the pair of a bucket name and an access key. The forwarding decision was made only by an access key in v1.1. This change prohibits performing S3's bucket operations, because bucket operations are not forwarded.
- The bucket namespace is shared by all users, so bucket names must be unique.
- Access keys now have expiration.
- Locks in accessing the keyval-db are omitted; locks are avoided in favor of atomic operations. Operations between Registrar and the administrator tool are sloppy.
- Commands of MinIO's MC are invoked directly from Registrar. MC commands were invoked at Multiplexer in v1.1.



