Contributed by Steve Loughran.
The S3Guard absence warning of HADOOP-16484 has been changed
so that by default the S3A connector only logs at debug
when the connection to the S3 Store does not have S3Guard
enabled.
The option to control this log level is now
fs.s3a.s3guard.disabled.warn.level
and can be one of: silent, inform, warn, fail.
On a failure, an ExitException is raised with exit code 49.
For details on this safety feature, consult the s3guard documentation.
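For illustration, a minimal sketch (not part of the patch; the bucket name is a placeholder) of setting the option programmatically rather than in core-site.xml:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3GuardWarnLevelSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // one of: silent, inform, warn, fail; with "fail", an ExitException
        // with exit code 49 is raised when the store has no S3Guard enabled
        conf.set("fs.s3a.s3guard.disabled.warn.level", "fail");
        Path path = new Path("s3a://example-bucket/");
        FileSystem fs = path.getFileSystem(conf);
        fs.close();
      }
    }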
Change-Id: If868671c9260977c2b03b3e475b9c9531c98ce79
Contributed by Steve Loughran.
This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.
That patch required wildfly.jar to be on the classpath. This
update:
* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl"
* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.
This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d
Contributed by Mukund Thakur.
If you set the logger org.apache.hadoop.fs.s3a.impl.NetworkBinding
to DEBUG, then when the S3A bucket probe is made, the DNS address
of the S3 endpoint is calculated and printed.
This is useful to see if a large set of processes are all using
the same IP address from the pool of load balancers to which AWS
directs clients when an AWS S3 endpoint is resolved.
This can have implications for performance: if all clients
access the same load balancer, performance may be suboptimal.
Note: if bucket probes are disabled (fs.s3a.bucket.probe = 0),
the DNS logging does not take place.
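As a sketch only (assuming the log4j 1.x APIs shipped with Hadoop; not part of the patch), the logger can be enabled programmatically as well as in log4j.properties:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class NetworkBindingDebugSketch {
      public static void main(String[] args) {
        // equivalent to the log4j.properties line:
        // log4j.logger.org.apache.hadoop.fs.s3a.impl.NetworkBinding=DEBUG
        Logger.getLogger("org.apache.hadoop.fs.s3a.impl.NetworkBinding")
            .setLevel(Level.DEBUG);
        // any S3A filesystem created after this point will log the resolved
        // DNS address of the S3 endpoint when its bucket probe runs
      }
    }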
Change-Id: I21b3ac429dc0b543f03e357fdeb94c2d2a328dd8
Contributed by Mukund Thakur
Optimize S3AFileSystem.listLocatedStatus() to perform list
operations directly and then fall back to HEAD checks for files.
Change-Id: Ia2c0fa6fcc5967c49b914b92f41135d07dab0464
Contributed by Steve Loughran.
This strips out all the -p preservation options which have already been
processed when uploading a file, before deciding whether or not to query
the far end for the status of the (existing/uploaded) file to see if any
other attributes need changing.
This will avoid 404 caching-related issues in S3, wherein a newly created
file can have a 404 entry in the S3 load balancer's cache from the
probes for the file's existence prior to the upload.
It partially addresses a regression caused by HADOOP-8143,
"Change distcp to have -pb on by default" that causes a resurfacing
of HADOOP-13145, "In DistCp, prevent unnecessary getFileStatus call when
not preserving metadata"
Change-Id: Ibc25d19e92548e6165eb8397157ebf89446333f7
add unit test, new ITest and then fix the issue: different scheme or bucket == skip
factored out the underlying logic for unit testing; also moved
maybeAddTrailingSlash to S3AUtils (while retaining/forwarding the existing method
in S3AFS).
tested: london, sole failure is
testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations)
filed HADOOP-16853
Change-Id: I4b8d0024469551eda0ec70b4968cba4abed405ed
Contributed by Bilahari T H.
The page limit is set in "fs.azure.list.max.results"; default value is 500.
There's currently a limit of 5000 in the store; there are no range checks
in the client code, so that limit can be changed on the server without
any need to update the abfs connector.
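For example, a sketch only (the container and account names are placeholders) of raising the page size in code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AbfsListPageSizeSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // request pages of up to 1000 entries per list call; default is 500,
        // and the server currently caps the page size at 5000
        conf.setInt("fs.azure.list.max.results", 1000);
        Path dir = new Path("abfs://container@account.dfs.core.windows.net/data");
        FileSystem fs = dir.getFileSystem(conf);
        for (FileStatus status : fs.listStatus(dir)) {
          System.out.println(status.getPath());
        }
      }
    }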
Contributed by Ben Roling.
ETag values are unpredictable with some S3 encryption algorithms.
Skip ITestS3AMiscOperations tests which make assertions about etags
when default encryption on a bucket is enabled.
When testing with an AWS account which lacks the privilege
for a call to getBucketEncryption(), we don't skip the tests.
In the event of failure, developers get to expand the
permissions of the account or relax default encryption settings.
Contributed by Steve Loughran
* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that
* ITestS3GuardFsShell turned off
Contributed by Steve Loughran.
Not all stores do complete validation here; in particular the S3A
connector does not: checking the entire directory tree to see whether any
parent path is a file significantly slows things down.
This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice production applications invariably create destination directories
before writing 1+ files into them; restricting the check purely to the mkdirs()
call delivers a significant speedup while implicitly including the checks.
Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
Contributed by Mukund Thakur.
This addresses an issue which surfaced with KMS encryption: the wrong
KMS key could be picked up in the S3 COPY operation, so
renamed files, while encrypted, would end up with the
bucket default key.
As well as adding tests in the new suite
ITestS3AEncryptionWithDefaultS3Settings,
AbstractSTestS3AHugeFiles has a new test method to
verify that the encryption settings also work
for large files copied via multipart operations.
Contributed by Sergei Poganshev.
Catches Exception instead of IOException in closeStream()
and so handles exceptions such as SdkClientException by
aborting the wrapped stream. This will increase resilience
to failures, as any occurring during stream closure
will be caught. Furthermore, because the
underlying HTTP connection is aborted, rather than closed,
it will not be recycled to cause problems on subsequent
operations.
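A much simplified sketch of the pattern (not the actual S3AInputStream.closeStream() code; the class and method names here are illustrative):

    import com.amazonaws.services.s3.model.S3ObjectInputStream;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CloseStreamSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(CloseStreamSketch.class);

      // Catch Exception rather than IOException so that RuntimeExceptions
      // such as SdkClientException are also handled; on any failure the
      // underlying HTTP connection is aborted rather than recycled.
      static void closeStream(S3ObjectInputStream wrapped) {
        try {
          wrapped.close();
        } catch (Exception e) {
          LOG.debug("Exception while closing stream; aborting connection", e);
          wrapped.abort();
        }
      }
    }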
This adds a new option, fs.s3a.bucket.probe, range (0-2), to
control which probe for bucket existence to perform on startup.
0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.
When set to 0, bucket existence checks won't be done
during initialization, thus making it faster.
When the bucket is not available in S3,
or if fs.s3a.endpoint points to the wrong instance of a private S3 store,
subsequent calls such as listing, read and write will fail with
an UnknownStoreException.
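A minimal sketch (not from the patch; the bucket name is a placeholder) of disabling the probe for faster startup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BucketProbeSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 0 = no existence check, 1 = v1 check, 2 = v2 check with permissions (default)
        conf.setInt("fs.s3a.bucket.probe", 0);
        Path path = new Path("s3a://example-bucket/data");
        FileSystem fs = path.getFileSystem(conf);
        // with probe 0, a missing or misconfigured bucket only surfaces later,
        // e.g. as an UnknownStoreException from a listing like this
        for (FileStatus status : fs.listStatus(path)) {
          System.out.println(status.getPath());
        }
      }
    }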
Contributed by:
* Mukund Thakur (main patch and tests)
* Rajesh Balamohan (v0 list and performance tests)
* lqjacklee (HADOOP-15990/v2 list)
* Steve Loughran (UnknownStoreException support)
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
new file: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
modified: hadoop-tools/hadoop-aws/src/test/resources/core-site.xml
Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384
Adds a new service code to recognise accounts without HTTP support; catches
that and considers such a response a successful validation of the ability of the
client to switch to http when the test parameters expect that.
Contributed by Steve Loughran
Contributed by Steve Loughran.
During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.
But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, then
one of those posts alone can trigger throttling on an already loaded
S3 directory tree. This can trigger backoff and retry, with the same
thousand-entry POST, and so recreate the exact same problem.
Fixes
* Page size for delete object requests is set in
fs.s3a.bulk.delete.page.size; the default is 250
(see the configuration sketch after this list).
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
can be set to false to disable throttle retry logic in the AWS
client SDK; it is all handled in the S3A client. This
gives more visibility into when operations are being throttled.
* Bulk delete throttling events are logged to the
org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
often then choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate", can track throttling
load over time.
* DynamoDB metastore throttle resilience issues have also been
identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
flag does not apply to DDB IO precisely because there may still be
lurking issues there and it is safest to rely on the DynamoDB client
SDK.
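A configuration sketch (illustrative only; the values shown are examples, not recommendations) combining the two options above:

    import org.apache.hadoop.conf.Configuration;

    public class BulkDeleteThrottlingSketch {
      // Returns a Configuration tuned for a heavily loaded S3 partition:
      // smaller delete pages, with throttling retries left to the S3A client
      // rather than the AWS SDK, so throttling events stay visible.
      static Configuration tunedConf() {
        Configuration conf = new Configuration();
        conf.setInt("fs.s3a.bulk.delete.page.size", 100);          // default 250
        conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);
        return conf;
      }
    }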
Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
Contributed by Steve Loughran.
This fixes two problems with S3Guard authoritative mode and
the auth directory flags which are stored in DynamoDB.
1. mkdirs was creating dir markers without the auth bit,
forcing needless scans on newly created directories and
files subsequently added; it was only with the first listStatus call
on that directory that the dir would be marked as authoritative -even
though it would be complete already.
2. listStatus(path) would reset the authoritative status bit of all
child directories even if they were already marked as authoritative.
Issue #2 is possibly the most expensive, as any treewalk using listStatus
(e.g globfiles) would clear the auth bit for all child directories before
listing them. And this would happen every single time...
essentially you weren't getting authoritative directory listings.
For the curious, the major bug was actually found during testing;
we'd all missed it during reviews.
A lesson there: the better the tests the fewer the bugs.
Maybe also: something obvious and significant can get by code reviews.
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/NullMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestRestrictedReadAccess.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreAuthoritativeMode.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
Change-Id: Ic3ffda13f2af2430afedd50fd657b595c83e90a7
Contributed by Mustafa Iman.
This adds a new configuration option fs.s3a.connection.request.timeout
to declare the timeout on HTTP requests to the AWS service;
0 means no timeout.
Measured in seconds; the usual time suffixes are all supported.
Important: this is the maximum duration of any AWS service call,
including upload and copy operations. If non-zero, it must be larger
than the time to upload multi-megabyte blocks to S3 from the client,
and to rename many-GB files. Use with care.
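As an illustration (a sketch, not part of the patch; the ten minute value is arbitrary):

    import org.apache.hadoop.conf.Configuration;

    public class RequestTimeoutSketch {
      static Configuration withRequestTimeout() {
        Configuration conf = new Configuration();
        // cap every AWS service call at ten minutes; time suffixes are
        // supported, and the default of 0 means "no timeout"
        conf.set("fs.s3a.connection.request.timeout", "10m");
        return conf;
      }
    }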
Change-Id: I407745341068b702bf8f401fb96450a9f987c51c
* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file
works with S3AFileStatus FS and S3ALocatedFileStatus
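A hedged sketch of how an application might use the enhanced builder (the path is a placeholder; in practice the FileStatus would normally be one returned from a listing rather than a fresh getFileStatus() call):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenFileWithStatusSketch {
      public static void main(String[] args) throws Exception {
        Path path = new Path("s3a://example-bucket/data/part-0000");
        FileSystem fs = path.getFileSystem(new Configuration());
        FileStatus status = fs.getFileStatus(path);  // normally reused from a listing
        // passing the status lets S3A skip its HEAD request on open, and use
        // the etag/version from the status when reading the object
        try (FSDataInputStream in =
                 fs.openFile(path).withFileStatus(status).build().get()) {
          System.out.println(in.read());
        }
      }
    }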
Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`.
The new option is documented and marked as experimental.
For details on how to use this, consult the performance document
in the s3a documentation.
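For example, a sketch (assuming wildfly.jar and the native openssl libraries are available, as the performance document describes):

    import org.apache.hadoop.conf.Configuration;

    public class OpenSslChannelModeSketch {
      static Configuration withOpenSsl() {
        Configuration conf = new Configuration();
        // experimental: switches the S3A HTTPS channel to the wildfly OpenSSL
        // binding; wildfly.jar must be on the classpath for this mode
        conf.set("fs.s3a.ssl.channel.mode", "openssl");
        return conf;
      }
    }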
This patch is the successor to HADOOP-16050 "S3A SSL connections
should use OpenSSL" -which was reverted because of
incompatibilities between the wildfly OpenSSL client and the AWS
HTTPS servers (HADOOP-16347). With the Wildfly release moved up
to 1.0.7.Final (HADOOP-16405) everything should now work.
Related issues:
* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final
Contributed by Sahil Takiar
Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e
Adds one extra test to the ABFS close logic, to explicitly
verify that the close sequence of FilterOutputStream is
not going to fail.
This is just a due-diligence patch, but it helps ensure
that no regressions creep in in future.
Contributed by Steve Loughran.
Change-Id: Ifd33a8c322d32513411405b15f50a1aebcfa6e48
Contains:
HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination
directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about
authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.
This patch improves the marking of newly created/import directory
trees in S3Guard DynamoDB tables as authoritative.
Specific changes:
* Renamed directories are marked as authoritative if the entire
operation succeeded (HADOOP-16474).
* When updating parent table entries as part of any table write,
there's no overwriting of their authoritative flag.
s3guard import changes:
* new -verbose flag to print out what is going on.
* The "s3guard import" command lets you declare that a directory tree
is to be marked as authoritative
hadoop s3guard import -authoritative -verbose s3a://bucket/path
When importing a listing and a file is found, the import tool queries
the metastore and only updates the entry if the file is different from
before, where different == new timestamp, etag, or length. S3Guard can get
timestamp differences due to clock skew in PUT operations.
As the recursive list performed by the import command doesn't retrieve the
versionID, the existing entry may in fact be more complete.
When updating an existing entry due to clock skew, the existing version ID
is propagated to the new entry (note: the etags must match; this is needed
to deal with inconsistent listings).
There is a new s3guard command to audit a s3guard bucket/path's
authoritative state:
hadoop s3guard authoritative -check-config s3a://bucket/path
This is primarily for testing/auditing.
The s3guard bucket-info command also provides some more details on the
authoritative state of a store (HADOOP-16684).
Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91
This hardens the wasb and abfs output streams' resilience to being invoked
in/after close().
wasb:
Explicitly raise IOEs on operations invoked after close,
rather than implicitly raise NPEs.
This ensures that invocations which catch and swallow IOEs will perform as
expected.
abfs:
When rethrowing an IOException in the close() call, explicitly wrap it
with a new instance of the same subclass.
This is needed to handle failures in try-with-resources clauses, where
any exception in close() is added as a suppressed exception to the one
thrown in the try {} clause
*and you cannot attach the same exception to itself*
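A much simplified sketch of the idea (not the actual abfs stream code; the real change wraps with a new instance of the same IOException subclass, while plain IOException stands in here):

    import java.io.IOException;
    import java.io.OutputStream;

    public class WrappingCloseSketch extends OutputStream {
      private IOException failureCause;   // hypothetical cached failure from an upload

      @Override
      public void write(int b) throws IOException {
        // sketch only: the real stream buffers and uploads data here
      }

      @Override
      public void close() throws IOException {
        if (failureCause != null) {
          // rethrow a *new* instance wrapping the original, so that a
          // try-with-resources block can add it as a suppressed exception
          // to whatever the try {} body threw; an exception instance can
          // never be attached to itself
          throw new IOException(failureCause.getMessage(), failureCause);
        }
      }
    }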
Contributed by Steve Loughran.
Change-Id: Ic44b494ff5da332b47d6c198ceb67b965d34dd1b
Contributed by Steve Loughran.
This is part of the ongoing refactoring of the S3A codebase, with the
delegation token support (HADOOP-14556) no longer given a direct reference
to the owning S3AFileSystem. Instead it gets a StoreContext and a new
interface, DelegationOperations, to access those operations offered by S3AFS
which are specifically needed by the DT bindings.
The sole operation needed is listAWSPolicyRules(), which is used to allow
S3A FS and the S3Guard metastore to return the AWS policy rules needed to
access their specific services/buckets/tables, allowing the AssumedRole
delegation token to be locked down.
As further restructuring takes place, that interface's implementation
can be moved to wherever the new home for those operations ends up.
Although it changes the API of an extension point, that feature (S3
Delegation Tokens) has not shipped; backwards compatibility is not a
problem except for anyone who has implemented DT support against trunk.
To those developers: sorry.
Change-Id: I770f58b49ff7634a34875ba37b7d51c94d7c21da
Contributed by Amir Shenavandeh.
This avoids overwrite consistency issues with S3 and other stores, though
given S3's copy operation is O(data), you are still best off using -direct
when distcp-ing to it.
Change-Id: I8dc9f048ad0cc57ff01543b849da1ce4eaadf8c3
Contributed by Jeetesh Mangwani.
This adds the ability to track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency in the Hadoop ABFS driver.
The latency information is sent back to the ADLS Gen 2 REST API endpoints in the subsequent requests.
Contributed by Steve Loughran.
This downgrades the checks for leftover __magic entries from fail to warn, now
that the parallel test runs make speculation more likely.
Change-Id: Ia4df2e90f82a06dbae69f3fdaadcbb0e0d713b38
Contributed by Steve Loughran.
This changes FileSystem instantiation so that if an IOException or RuntimeException is
raised in the invocation of FileSystem.initialize(), then a best-effort
attempt is made to close the FS instance; exceptions raised
during that cleanup are swallowed.
The S3AFileSystem is also modified to do its own cleanup if an
IOException is raised during its initialize() process, it being the
FS we know has the "potential" to leak threads, especially in
extension points (e.g. AWS Authenticators) which spawn threads.
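A rough sketch of the pattern (illustrative only, not the actual FileSystem code; the helper name is hypothetical):

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.IOUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class InitCleanupSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(InitCleanupSketch.class);

      // If initialize() fails, close the half-constructed instance on a
      // best-effort basis so it cannot leak threads; the cleanup logs and
      // swallows any secondary exception, then the original is rethrown.
      static FileSystem initOrClose(FileSystem fs, URI uri, Configuration conf)
          throws IOException {
        try {
          fs.initialize(uri, conf);
          return fs;
        } catch (IOException | RuntimeException e) {
          IOUtils.cleanupWithLogger(LOG, fs);
          throw e;
        }
      }
    }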
Change-Id: Ib84073a606c9d53bf53cbfca4629876a03894f04
Contributed by Steve Loughran.
Includes HADOOP-16651. S3 getBucketLocation() can return "US" for us-east.
Change-Id: Ifc0dca76e51495ed1a8fc0f077b86bf125deff40
Contributed by Bilahari T H.
This also addresses HADOOP-16498: AzureADAuthenticator cannot authenticate
in China.
Change-Id: I2441dd48b50b59b912b0242f7f5a4418cf94a87c
Contributed by Steve Loughran.
This addresses two scale issues which have surfaced in large scale benchmarks
of the S3A Committers.
* Thread pools are not cleaned up.
This now happens, with tests.
* OOM on job commit for jobs with many thousands of tasks,
each generating tens of (very large) files.
Instead of loading all pending commits into memory as a single list, the list
of files to load is the sole list which is passed around; .pendingset files are
loaded and processed in isolation -and reloaded if necessary for any
abort/rollback operation.
The parallel commit/abort/revert operations now work at the .pendingset level,
rather than that of individual pending commit files. The existing parallelized
Tasks API is still used to commit those files, but with a null thread pool, so
as to serialize the operations.
Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797
Contributed by Steve Loughran.
Replaces the committer-specific terasort and MR test jobs with parameterization
of the (now single) tests and use of file:// over hdfs:// as the cluster FS.
The parameterization ensures that only one of the specific committer tests
runs at a time; overloads of the test machines are less likely, and so the
suites can be pulled back into the parallel phase.
There's also more detailed validation of the stage outputs of the terasorting;
if one test fails the rest are all skipped. This and the fact that job
output is stored under target/yarn-${timestamp} means failures should
be more debuggable.
Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095
Contributed by Steve Loughran.
Includes
-S3A glob scans don't bother trying to resolve symlinks
-stack traces don't get lost in getFileStatuses() when exceptions are wrapped
-debug level logging of what is up in Globber
-Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.
-ITestRestrictedReadAccess tests incomplete read access to files.
This adds a builder API for constructing globbers which other stores can use
so that they too can skip symlink resolution when not needed.
Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7
Contributed by Steve Loughran.
This complements the StreamCapabilities interface by allowing applications to probe whether a specific path on a specific instance of a FileSystem client
offers a specific capability.
This is intended to allow applications to determine
* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.
* Whether a specific feature is believed to be available in the remote store.
As well as a common set of capabilities defined in CommonPathCapabilities,
file systems are free to add their own capabilities, prefixed with
fs. + scheme + .
The plan is to identify and document more capabilities; for file systems which add new features, a declaration of the feature's availability should always be provided.
Note
* The remote store is not expected to be checked for the feature;
it is more a check of the client API and the client's configuration/knowledge
of the state of the remote system.
* Permissions are not checked.
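An illustrative sketch of such a probe (the path is a placeholder; FS_APPEND is one of the constants in CommonPathCapabilities):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.CommonPathCapabilities;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PathCapabilitySketch {
      public static void main(String[] args) throws Exception {
        Path path = new Path("s3a://example-bucket/logs/events");
        FileSystem fs = path.getFileSystem(new Configuration());
        // probe the client for the capability before relying on it; this does
        // not query the remote store and does not check permissions
        if (fs.hasPathCapability(path, CommonPathCapabilities.FS_APPEND)) {
          System.out.println("append() is believed to be available at " + path);
        }
      }
    }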
Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5
This uses the length of the file known at the start of the copy to determine the amount of data to copy.
* If a file is appended to during the copy, the original bytes are copied.
* If a file is truncated during a copy, or the attempt to read the data fails with a truncated stream,
distcp will now fail. Until now these failures were not detected.
Contributed by Mukund Thakur.
Change-Id: I576a49d951fa48d37a45a7e4c82c47488aa8e884
Contributed by Sahil Takiar.
This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common
as the DelegatingSSLSocketFactory and binds the S3A connector to it so that
it can avoid using those HTTPS algorithms which are underperformant on Java 8.
Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd
Contributed by Steve Loughran.
This patch avoids issuing any HEAD path request when creating a file with overwrite=true,
so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile
in their own code.
The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files
for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even
after S3A is patched to not do it itself.
Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy
independently configurable from other retry policies; it is also exponential, but with
different parameters. This is because every HEAD request will refresh any 404 cached in
the S3 Load Balancers. It's not enough to retry: we have to have a suitable gap between
attempts to (hopefully) ensure any cached entry will be gone.
The options and values are:
fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7
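A configuration sketch (illustrative only, using the values quoted above):

    import org.apache.hadoop.conf.Configuration;

    public class ConsistencyRetrySketch {
      // Tune the S3Guard FileNotFound retry policy: a longer interval gives
      // any 404 cached by the S3 load balancers more time to expire between
      // the (exponentially spaced) attempts.
      static Configuration tunedRetryConf() {
        Configuration conf = new Configuration();
        conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
        conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
        return conf;
      }
    }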
The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught,
so it is not downgraded to false. Thus: when a rename is unrecoverable, this fact is propagated.
Copy operations without S3Guard lack the confidence that the file exists, so don't retry the same way:
it will fail fast with a different error message. However, because create(path, overwrite=false) no
longer does HEAD path, we can at least be confident that S3A itself is not creating those cached
404 markers.
Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
Contributed by Steve Loughran.
This overlaps the scanning for directory entries with batched calls to S3 DELETE and updates of the S3Guard tables.
It also uses S3Guard to list the files to delete, so it finds newly created files even when S3 listings are not yet consistent.
For paths where the client considers S3Guard to be authoritative, we also do a recursive LIST of the store and delete files; this is to find unindexed files and to guarantee that the delete(path, true) call really does delete everything underneath.
Change-Id: Ice2f6e940c506e0b3a78fa534a99721b1698708e