Apache Traffic Server

From Wikitech

Apache Traffic Server (ATS) is a caching HTTP proxy used as the backend (on-disk) component of Wikimedia's CDN. In-memory, ephemeral caching is done by cache frontends running Varnish.

Architecture

There are three distinct processes in Traffic Server:

traffic_server
The process responsible for dealing with user traffic: accepting connections, processing requests, and serving documents from cache or the origin server. traffic_server is an event-driven, multi-threaded process. Threads are used to take advantage of multiple CPUs, not to handle multiple connections concurrently (e.g. by spawning a thread per connection, or by using a thread pool). Instead, an event system schedules work on threads. ATS uses a state machine (compare with the Varnish FSM) to handle each transaction (a single HTTP request from a client and the response Traffic Server sends to that client) and provides a system of hooks where plugins (e.g. Lua) can step in and do things. Specific timers are used at the various states.
traffic_manager
Responsible for launching, monitoring, and configuring traffic_server, as well as handling the statistics interface, cluster administration, and virtual IP failover.
traffic_cop
A watchdog program monitoring the health of both traffic_manager and traffic_server. Traditionally this has been the command used to start ATS. Under systemd it can probably be avoided, with traffic_manager used as the program executed to start the unit.
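A minimal systemd unit along those lines might look as follows. This is a sketch, not the production puppetized unit: the paths and user are assumptions, and systemd itself takes over the watchdog role of traffic_cop.

```ini
# /etc/systemd/system/trafficserver.service -- illustrative sketch
[Unit]
Description=Apache Traffic Server
After=network.target

[Service]
# Launch traffic_manager directly; systemd restarts it on failure,
# making the traffic_cop watchdog unnecessary.
ExecStart=/usr/bin/traffic_manager --nosyslog
ExecReload=/usr/bin/traffic_ctl config reload
User=trafficserver
Restart=on-failure

[Install]
WantedBy=multi-user.target
```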

Terminology

ATS uses the term transaction with a different meaning depending on the protocol. For HTTP, a transaction is an HTTP request and its response. In the case of HTTP/2, a transaction is an HTTP/2 stream. The term connection always refers to a TCP connection, regardless of context. See the relevant ATS documentation about Client Sessions and Transactions.

Configuration

The changes to the default configuration required to get a caching proxy are:

# /etc/trafficserver/remap.config
map client_url origin_server_url

The following rules map grafana and phabricator to their respective backends and define a catchall for requests that don't match either of the first two rules:

# /etc/trafficserver/remap.config
map http://grafana.wikimedia.org/ http://krypton.eqiad.wmnet/
map http://phabricator.wikimedia.org/ http://iridium.eqiad.wmnet/
map / http://deployment-mediawiki05.deployment-prep.eqiad1.wikimedia.cloud/
# /etc/trafficserver/records.config
CONFIG proxy.config.http.server_ports STRING 3128 3128:ipv6

CONFIG proxy.config.admin.user_id STRING trafficserver
CONFIG proxy.config.http.cache.required_headers INT 1
CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
CONFIG proxy.config.disable_configuration_modification INT 1

If proxy.config.http.cache.required_headers is set to 2, which is the default, the origin server is required to set an explicit lifetime, from either Expires or Cache-Control: max-age. By setting required_headers to 1, objects with only a Last-Modified header are considered for caching too. Setting the value to 0 means that no headers are required to make documents cacheable.
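To illustrate the difference, here is a hypothetical origin response carrying only a validator header:

```http
HTTP/1.1 200 OK
Last-Modified: Mon, 13 Jul 2020 09:45:00 GMT
Content-Type: text/html
```

With required_headers set to 1 this response is considered for caching; at the default value of 2 it is not, since it lacks both Expires and Cache-Control: max-age.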

Load balancing

In order to load balance requests among origin servers, parent_proxy_routing needs to be enabled in records.config:

# records.config
CONFIG proxy.config.http.parent_proxy_routing_enable INT 1
CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING parent_select

A remap rule needs to be configured for the site:

# remap.config
map http://en.wikipedia.org https://enwiki.org

Finally, load balancing can be configured by specifying the nodes and the load balancing policy in parent.config:

# parent.config
dest_domain=enwiki.org parent="mw1261.eqiad.wmnet:443,mw1262.eqiad.wmnet:443" parent_is_proxy=false round_robin=strict
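Besides round_robin=strict, parent.config supports other selection policies; for instance, consistent_hash selects a parent by hashing the URL, so a given object is consistently requested from the same backend. A sketch reusing the hostnames above:

```
# parent.config
dest_domain=enwiki.org parent="mw1261.eqiad.wmnet:443,mw1262.eqiad.wmnet:443" parent_is_proxy=false round_robin=consistent_hash
```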

Logging

Diagnostic output can be sent to standard output and error instead of the default logfiles, which is useful for taking advantage of systemd's journal.
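Assuming the upstream proxy.config.diags.output.* settings, where each value is a string of destination letters (O for stdout, E for stderr, S for syslog, L for diags.log), a sketch along these lines sends diagnostics to stderr for the journal to capture:

```
# records.config
CONFIG proxy.config.diags.output.status STRING E
CONFIG proxy.config.diags.output.warning STRING E
CONFIG proxy.config.diags.output.error STRING E
CONFIG proxy.config.diags.output.fatal STRING E
```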

Cache inspector

To enable the cache inspector functionality, add the following remap rules:

map /cache-internal/ http://{cache-internal}
map /cache/ http://{cache}
map /stat/ http://{stat}
map /test/ http://{test}
map /hostdb/ http://{hostdb}
map /net/ http://{net}
map /http/ http://{http}

Additional ATS instances

Traffic Server provides a poorly documented feature called layouts. An ATS layout defines the following paths:

  • exec_prefix (TS_BUILD_EXEC_PREFIX)
  • bindir (TS_BUILD_BINDIR)
  • sbindir (TS_BUILD_SBINDIR)
  • sysconfdir (TS_BUILD_SYSCONFDIR)
  • datadir (TS_BUILD_DATADIR)
  • includedir (TS_BUILD_INCLUDEDIR)
  • libdir (TS_BUILD_LIBDIR)
  • libexecdir (TS_BUILD_LIBEXECDIR)
  • localstatedir (TS_BUILD_LOCALSTATEDIR)
  • runtimedir (TS_BUILD_RUNTIMEDIR)
  • logdir (TS_BUILD_LOGDIR)
  • mandir (TS_BUILD_MANDIR)
  • infodir (TS_BUILD_INFODIR)
  • cachedir (TS_BUILD_CACHEDIR)

Those paths are defined at build time by their corresponding TS_BUILD_ constants. However, they can be overridden at runtime by using a layout/runroot file: a YAML file that defines the paths listed above. An example layout can be found in the upstream documentation.
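A minimal sketch of such a layout file, with illustrative paths for a hypothetical extra instance under /srv/trafficserver/tls (not necessarily the full set of keys the upstream example defines):

```yaml
# custom.yml -- hypothetical runroot layout
prefix: /srv/trafficserver/tls
exec_prefix: /srv/trafficserver/tls
bindir: /srv/trafficserver/tls/bin
sysconfdir: /srv/trafficserver/tls/etc
localstatedir: /srv/trafficserver/tls/var
runtimedir: /srv/trafficserver/tls/var/run
logdir: /srv/trafficserver/tls/var/log
cachedir: /srv/trafficserver/tls/var/cache
```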

After defining the layout file, the runroot can be initialized by running traffic_layout:

$ traffic_layout --init --layout="custom.yml" --copy-style=soft

Take into account that the custom layout defines its own bin and sbin directories, so the binaries need to be copied inside the runroot. Fortunately, the --copy-style flag controls how the executables are copied:

  • copy: Full copy
  • hard: Use hard links
  • soft: Use symlinks

Our goal here is to run several instances of the same ATS version; --copy-style=soft makes that possible while still benefiting from system-wide ATS upgrades.

After the layout has been initialized, any Traffic Server CLI tool can use it by passing the --run-root option or setting the TS_RUNROOT environment variable:

$ traffic_ctl --run-root="custom.yml" reload
$ TS_RUNROOT="custom.yml" traffic_ctl reload

Debugging

The XDebug plugin allows clients to check various aspects of ATS operation.

To enable the plugin, add xdebug.so to plugin.config, add the following lines to records.config, and restart trafficserver.

CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING xdebug

Once the plugin is enabled, clients can specify various values in a request header and receive the relevant information back. The header is X-Debug by default but is set to X-ATS-Debug at Wikimedia.

For example:

# cache hit
$ curl -v -H "X-ATS-Debug: X-Milestones" http://localhost 2>&1 | grep Milestones:
< X-Milestones: PLUGIN-TOTAL=0.000022445, PLUGIN-ACTIVE=0.000022445, CACHE-OPEN-READ-END=0.000078570, CACHE-OPEN-READ-BEGIN=0.000078570, UA-BEGIN-WRITE=0.000199094, UA-READ-HEADER-DONE=0.000000000, UA-FIRST-READ=0.000000000, UA-BEGIN=0.000000000

# cache miss
< X-Milestones: PLUGIN-TOTAL=0.000017432, PLUGIN-ACTIVE=0.000017432, DNS-LOOKUP-END=0.091413811, DNS-LOOKUP-BEGIN=0.000148548, CACHE-OPEN-WRITE-END=0.091413811, CACHE-OPEN-WRITE-BEGIN=0.091413811, CACHE-OPEN-READ-END=0.000056997, CACHE-OPEN-READ-BEGIN=0.000056997, SERVER-READ-HEADER-DONE=0.218755336, SERVER-FIRST-READ=0.218755336, SERVER-BEGIN-WRITE=0.091413811, SERVER-CONNECT-END=0.091413811, SERVER-CONNECT=0.091413811, SERVER-FIRST-CONNECT=0.091413811, UA-BEGIN-WRITE=0.218755336, UA-READ-HEADER-DONE=0.000000000, UA-FIRST-READ=0.000000000, UA-BEGIN=0.000000000

The full list of debugging headers is available in the XDebug Plugin documentation.

In the setup at WMF, the plugin can be enabled by setting profile::trafficserver::backend::enable_xdebug to true in hiera. It can then be used by specifying the X-ATS-Debug request header. For example, to dump all client/intermediary/origin request/response headers:

$ curl -H "X-ATS-Debug: log-headers" http://localhost

Process count

For the Apache pool, the client is Varnish. Varnish typically times out and disconnects after 60 seconds, then it begins serving HTTP 503 responses. However, when Varnish disconnects, the PHP process is not destroyed (ignore_user_abort=true). This helps to maintain database consistency, but the tradeoff is that the apache process pool can become very large, and often requires manual intervention to reset it back to a reasonable size.

An apache process pool overload can easily be detected by looking at the total process count in Ganglia.

Regardless of the root cause, an apache process pool overload should be dealt with by regularly restarting the apache processes using /home/wikipedia/bin/apache-restart-all. In an overload situation, the bulk of the process pool is taken up with long-running requests, so restarting kills more long-running requests than short requests. Regular restarting of apache allows parts of the site which are still fast to continue working.

Regular restarting is somewhat detrimental to database consistency, but the effects of this are relatively minor compared to the site being completely down.

There are two possible reasons for an apache process pool overload:

  • Some resource on the apache server itself has been exhausted, usually CPU.
  • Apache is acting as a client for some backend, and that backend is failing in a slow way.

CPU usage

If CPU usage on most servers is above 90%, and CPU usage has plateaued (i.e. it has stopped bouncing up and down due to random variations in demand), then you can assume that the problem is an apache CPU overload. Otherwise, the problem is with one of the many remote services that MediaWiki depends on.

CPU profiling can be useful to identify the causes of CPU usage, in cases where the relevant profiling section terminates successfully, instead of ending with a timeout or other fatal error. Run /home/wikipedia/bin/clear-profile to reset the counters.

Note that recursive functions such as PPFrame_DOM::expand() are counted multiple times, roughly as many times as the average stack depth, so the numbers for those functions need to be interpreted with caution. Parser::parse() is typically non-recursive, and gives an upper limit for the CPU usage of recursive parser functions.

In cases of severe overload, or other cases where profiling is not useful, it is possible to identify the source of high CPU usage by randomly attaching to apache processes.

All our apache servers should have PHP debug symbols installed. Our custom PHP packages have stripping disabled. So just log in to a random apache, and run top. Pick the first process that seems to be using CPU, and run gdb -p PID to attach to it. Then run bt to get a backtrace. Here's the bottom of a typical backtrace:

#16 0x00007fa1230e85da in php_execute_script (primary_file=0x7fff8387eb10)
    at /tmp/buildd/php5-5.2.4/main/main.c:2003
#17 0x00007fa1231b19e4 in php_handler (r=0x13dd838)
    at /tmp/buildd/php5-5.2.4/sapi/apache2handler/sapi_apache2.c:650
#18 0x0000000000437d9a in ap_run_handler ()
#19 0x000000000043b1bc in ap_invoke_handler ()
#20 0x00000000004478ce in ap_process_request ()
#21 0x0000000000444cc8 in ?? ()
#22 0x000000000043eef2 in ap_run_process_connection ()
#23 0x000000000044b6c5 in ?? ()
#24 0x000000000044b975 in ?? ()
#25 0x000000000044c208 in ap_mpm_run ()
#26 0x0000000000425a44 in main ()

The "r" parameter to php_handler has the URL in it, which is extremely useful information. So switch to the relevant frame and print it out:

(gdb) frame 17
#17 0x00007fa1231b19e4 in php_handler (r=0x13dd838)
    at /tmp/buildd/php5-5.2.4/sapi/apache2handler/sapi_apache2.c:650
650	/tmp/buildd/php5-5.2.4/sapi/apache2handler/sapi_apache2.c: No such file or directory.
	in /tmp/buildd/php5-5.2.4/sapi/apache2handler/sapi_apache2.c
(gdb) print r->hostname
$2 = 0x13df198 "ko.wikipedia.org"
(gdb) print r->unparsed_uri
$3 = 0x13dee48 "/wiki/%ED%8A%B9%EC%88%98%EA%B8%B0%EB%8A%A5:%EA%B8%B0%EC%97%AC/%EB%A7%98%EB%A7%88"

At the other end of the stack, there is information about what is going on in PHP. See GDB with PHP for some information about using it.

An extension to this idea of profiling by randomly attaching to processes in gdb is Domas's Poor Man's Profiler.


Request logs

Non-purge request logs can be inspected by running atslog-backend, a wrapper around fifo-log-tailer:

# atslog-backend

Call fifo-log-tailer directly to inspect PURGE traffic:

# fifo-log-tailer -socket /var/run/trafficserver/purge.pipe

Testing patches in labs

The traffic project in labs provides a testbed for ATS-related patches, both in terms of Lua/configuration and ATS itself. One of the traffic instances with a hostname beginning with traffic-cache- can be used for this purpose. The traffic labs project also features a self-hosted puppetmaster on which puppet patches can be cherry-picked. At the time of this writing the instance is traffic-cache-bullseye.traffic.eqiad1.wikimedia.cloud, but double-check that this is still the case.

ema@traffic-cache-atstext-buster:~$ grep ^server /etc/puppet/puppet.conf
server = traffic-puppetmaster-buster.traffic.eqiad1.wikimedia.cloud

The labs testbed provides a significantly simplified remap configuration compared to production. At the time of writing it looks like this on cache_text instances:

ema@traffic-cache-bullseye:~$ sudo cat /etc/trafficserver/remap.config 
# https://docs.trafficserver.apache.org/en/latest/admin-guide/files/remap.config.en.html
# This file is managed by Puppet.

map http://en.wikipedia.beta.wmflabs.org http://deployment-mediawiki-07.deployment-prep.eqiad1.wikimedia.cloud

Test requests against MediaWiki can be performed as follows:

ema@traffic-cache-bullseye:~$ curl -v -H 'Host: en.wikipedia.beta.wmflabs.org' '127.0.0.1:3128/w/load.php?lang=it&modules=startup&only=scripts&raw=1&skin=vector'

Use Host: upload.wikimedia.beta.wmflabs.org on cache_upload instances instead.

Building and running from Git

To build trafficserver from git:

autoreconf -if
./configure --enable-layout=Debian --sysconfdir=/etc/trafficserver --libdir=/usr/lib/trafficserver --libexecdir=/usr/lib/trafficserver/modules
make -j8

Add a minimal /etc/trafficserver/records.config:

CONFIG proxy.config.disable_configuration_modification INT 1
# Replace $PATH_TO_REPO!
CONFIG proxy.config.bin_path STRING ${PATH_TO_REPO}/trafficserver/src/traffic_server/

The newly built traffic_server and traffic_manager binaries can be tested as follows:

sudo -u trafficserver ./src/traffic_server/traffic_server
sudo -u trafficserver ./src/traffic_manager/traffic_manager --nosyslog

Packaging

To package a new stable release, download it from https://trafficserver.apache.org/downloads and check its SHA512 and PGP verification (the links are on the download page). See Apache's documentation on verification.

Then import it into operations/debs/trafficserver with:

PRISTINE_ALL_XDELTA=xdelta gbp import-orig --pristine-tar /tmp/trafficserver-9.1.2.tar.bz2

After this there are essentially two steps: pushing the branches below as they are (verbatim), and pushing the contents of the debian/ directory for review.

First, update the following branches, pushing them all to the repository:

  • master
  • upstream
  • pristine-tar

Also push the tags corresponding to the release, for example:

git push origin upstream/9.1.2

Now we need to update the debian/ directory. Make sure that you rebase your changes on top of the release you are trying to build.

  • Make changes to the debian/ directory to update the packaging as and if required.
    • Usually this is only required when upgrading between major version changes, such as 8.x -> 9.x.
  • Submit the changes for review.

Build with:

WIKIMEDIA=yes ARCH=amd64 GBP_PBUILDER_DIST=bullseye DIST=bullseye GIT_PBUILDER_AUTOCONF=no gbp buildpackage -jauto -us -uc -sa --git-builder=git-pbuilder

The procedure to package new RC versions is roughly as follows. This assumes that: (1) the new RC artifacts are made available under https://people.apache.org/~bcall/8.0.3-rc0/, and (2) you want to build the new packages on boron.eqiad.wmnet.

https_proxy=http://url-downloader.wikimedia.org:8080 wget https://people.apache.org/~bcall/8.0.3-rc0/trafficserver-8.0.3-rc0.tar.bz2
# Check that the sha512 matches https://people.apache.org/~bcall/8.0.3-rc0/trafficserver-8.0.3-rc0.tar.bz2.sha512

Then obtain our latest prod packages and update them:

apt-get source trafficserver
cd trafficserver-8.0.2/
uupdate -v 8.0.3~rc0 ../trafficserver-8.0.3-rc0.tar.bz2
cd ../trafficserver-8.0.3~rc0
BACKPORTS=yes WIKIMEDIA=yes ARCH=amd64 DIST=stretch GIT_PBUILDER_AUTOCONF=no git-pbuilder

Running autests

There are a number of "gold tests" shipped with ATS. They're under tests/gold_tests and can be run as follows:

./tests/autest.sh --ats-bin /usr/bin/ --filter redirect

Cheatsheet

Rolling restart in codfw:

sudo cumin -b1 -s30 'A:cp and A:codfw' 'ats-backend-restart'

Show non-default configuration values:

sudo traffic_ctl config diff

Configuration reload:

sudo traffic_ctl config reload

Check if a reload/restart is needed:

sudo traffic_ctl config status

Start in debugging mode, dumping headers:

sudo traffic_server -T http_hdrs

Access metrics from the CLI:

traffic_ctl metric get proxy.process.http.cache_hit_fresh

Multiple metrics can be accessed with 'match':

traffic_ctl metric match proxy.process.ssl.*

Get metrics relevant to the TLS instance:

sudo traffic_ctl --run-root=/srv/trafficserver/tls metric match '.*http2.*'

Set the value of a metric to zero:

traffic_ctl metric zero proxy.process.http.completed_requests

Show storage usage:

traffic_server -C check

Wipe storage. This needs to be done while trafficserver isn't running.

traffic_server -C clear_cache

Lua scripting

ATS plugins can be written in Lua. As an example, this is how to choose an origin server dynamically:

# /etc/trafficserver/remap.config
map http://127.0.0.1:3128/ http://$origin_server_ip/ @plugin=/usr/lib/trafficserver/modules/tslua.so @pparam=/var/tmp/ats-set-backend.lua
reverse_map http://$origin_server_ip/ http://127.0.0.1:3128/

Choosing origin server

Selecting the appropriate origin server for a given request can be done using ATS mapping rules. The same goal can be achieved in Lua:

-- /var/tmp/ats-set-backend.lua
function do_remap()
    local url = ts.client_request.get_url()
    if url:match("/api/rest_v1/") then
        ts.client_request.set_url_host('origin-server.eqiad.wmnet')
        ts.client_request.set_url_port(80)
        ts.client_request.set_url_scheme('http')
        return TS_LUA_REMAP_DID_REMAP
    end
    return TS_LUA_REMAP_NO_REMAP
end

Negative response caching

By default ATS caches negative responses such as 404, 503 and others only if the response defines a max-age via the Cache-Control header. This behavior can be changed by setting the configuration option proxy.config.http.negative_caching_enabled, which allows caching of negative responses that do NOT specify Cache-Control. If negative caching is enabled, the lifetime of negative responses without Cache-Control is defined by proxy.config.http.negative_caching_lifetime, in seconds, defaulting to 1800.
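For example, to enable the behavior described above and cache negative responses lacking Cache-Control for ten minutes:

```
# records.config
CONFIG proxy.config.http.negative_caching_enabled INT 1
CONFIG proxy.config.http.negative_caching_lifetime INT 600
```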

One might however want to cache 404 responses that do not send Cache-Control, without caching any 503 responses. Given that proxy.config.http.negative_caching_enabled enables the behavior for a whole set of negative responses, and that ATS versions before 8.0.0 did not allow specifying the list of negative response status codes to cache, the goal can be achieved by setting Cache-Control in Lua only for certain status codes:

function read_response()
    local status_code = ts.server_response.get_status()
    local cache_control = ts.server_response.header['Cache-Control']

    -- Cache 404 responses without CC for 10s
    if status_code == 404 and not(cache_control) then
        ts.server_response.header['Cache-Control'] = 'max-age=10'
    end
end

function do_remap()
    ts.hook(TS_LUA_HOOK_READ_RESPONSE_HDR, read_response)
    return 0
end

Starting with ATS 8.0.0, the configuration option proxy.config.http.negative_caching_list allows specifying the list of negative response status codes to cache.
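For example, assuming the option takes a space-separated list of status codes, restricting negative caching to 404 responses would look like:

```
# records.config
CONFIG proxy.config.http.negative_caching_enabled INT 1
CONFIG proxy.config.http.negative_caching_list STRING 404
```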

Setting X-Cache-Int

As another example, the following script takes care of setting the X-Cache-Int response header:

-- /var/tmp/ats-set-x-cache-int.lua
function cache_lookup()
     local cache_status = ts.http.get_cache_lookup_status()
     ts.ctx['cstatus'] = cache_status
end

function cache_status_to_string(status)
     if status == TS_LUA_CACHE_LOOKUP_MISS then
        return "miss"
     end

     if status == TS_LUA_CACHE_LOOKUP_HIT_FRESH then
        return "hit"
     end

     if status == TS_LUA_CACHE_LOOKUP_HIT_STALE then
        return "miss"
     end

     if status == TS_LUA_CACHE_LOOKUP_SKIPPED then
        return "pass"
     end

     return "bug"
end

function gen_x_cache_int()
     local hostname = "cp4242" -- from puppet
     local cache_status = cache_status_to_string(ts.ctx['cstatus'])

     local v = ts.client_response.header['X-Cache-Int']
     local mine = hostname .. " " .. cache_status

     if (v) then
        v = v .. ", " .. mine
     else
        v = mine
     end

     ts.client_response.header['X-Cache-Int'] = v
     ts.client_response.header['X-Cache-Status'] = cache_status
end

function do_remap()
     ts.hook(TS_LUA_HOOK_CACHE_LOOKUP_COMPLETE, cache_lookup)
     ts.hook(TS_LUA_HOOK_SEND_RESPONSE_HDR, gen_x_cache_int)
     return 0
end

Custom metrics

Ad-hoc metrics can be created, incremented and accessed in Lua. For example, to keep per-origin counters of origin server requests:

function do_global_send_request()
    local ip = ts.server_request.server_addr.get_ip()

    if ip == "0.0.0.0" then
        -- internal stuff, not an actual origin server request
        return 0
    end

    local counter_name = "origin_requests_" .. ip

    counter = ts.stat_find(counter_name)

    if counter == nil then
        counter = ts.stat_create(counter_name,
                                 TS_LUA_RECORDDATATYPE_INT,
                                 TS_LUA_STAT_PERSISTENT,
                                 TS_LUA_STAT_SYNC_COUNT)
    end

    counter:increment(1)
end
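Counters created this way can then be read like any other metric, for instance with traffic_ctl's match subcommand shown earlier in the cheatsheet (the metric name pattern below assumes the counter naming used in the script above):

```
$ sudo traffic_ctl metric match 'origin_requests.*'
```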

Forcing a cache miss (similar to ban)

Sometimes it is desirable to ensure that certain cached responses are not returned to clients, and that instead the objects are fetched again from the origin server.

This can be done in Lua by overriding the proxy.config.http.cache.generation setting for a (set of) specific transaction(s). The value passed will be combined with the cache key at cache lookup time, effectively turning one single cache lookup for a certain object into a miss. The object will be fetched again from the origin, and all subsequent cache lookups will hit on the new object.

function do_global_read_request()
    if ts.client_request.header['Host'] == 'it.wikipedia.org' then
        ts.http.config_int_set(TS_LUA_CONFIG_HTTP_CACHE_GENERATION, 1593784707)
    end
end

The example uses the number of seconds since epoch but any integer other than -1 would do. Later on, once it is certain that the old objects have expired, the change can be reverted. Wait at least for the maximum TTL, 24 hours at the time of this writing, before reverting.


Debugging

Debugging output can be produced from Lua with ts.debug("message"). The following configuration needs to be enabled to log debug output:

CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING ts_lua

In case other debugging tags need to be enabled, such as for example http_hdrs:

CONFIG proxy.config.diags.debug.tags STRING ts_lua|http_hdrs

See the documentation for more tags.

Unit testing

The busted framework allows testing Lua scripts. It can be installed as follows:

apt install build-essential luarocks
luarocks install busted
luarocks install luacov

The following unit tests cover some of the functionality implemented by ats-set-x-cache-int.lua:

-- unit_test.lua
_G.ts = { client_response = {  header = {} }, ctx = {} }

describe("Busted unit testing framework", function()
  describe("script for ATS Lua Plugin", function()

    it("test - hook", function()
      stub(ts, "hook")

      require("ats-set-x-cache-int")
      local result = do_remap()
      assert.are.equals(0, result)
    end)

    it("test - gen_x_cache_hit", function()
      stub(ts, "hook")

      require("ats-set-x-cache-int")
      local result = gen_x_cache_int()

      assert.are.equals('miss', ts.client_response.header['X-Cache-Status'])
      assert.are.equals('cp4242 miss', ts.client_response.header['X-Cache-Int'])
    end)

  end)
end)

Run the tests and generate a coverage report with:

# From the root of your puppet repo
$ /bin/sh -c 'busted --verbose --helper=modules/profile/files/trafficserver/mock.helper.lua --lpath=modules/profile/files/trafficserver/?.lua ./modules/profile/files/trafficserver/*.lua'
●●
2 successes / 0 failures / 0 errors / 0 pending : 0.012771 seconds

$ luacov ; cat luacov.report.out

Storage

Information about permanent storage can be obtained using the python3-superior-cache-analyzer Debian package:

from scan import span
s = span.Span("/dev/nvme0n1p1")
print(s)