Scalastyle standard configuration
true
ARROW, EQUALS, ELSE, TRY, CATCH, FINALLY, LARROW, RARROW
ARROW, EQUALS, COMMA, COLON, IF, ELSE, DO, WHILE, FOR, MATCH, TRY, CATCH, FINALLY, LARROW, RARROW
^AnyFunSuite[A-Za-z]*$
Tests must extend org.apache.spark.SparkFunSuite instead.
^println$
spark(.sqlContext)?.sparkContext.hadoopConfiguration
@VisibleForTesting
Runtime\.getRuntime\.addShutdownHook
mutable\.SynchronizedBuffer
Class\.forName
Await\.result
Await\.ready
(\.toUpperCase|\.toLowerCase)(?!(\(|\(Locale.ROOT\)))
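This regex flags `.toUpperCase`/`.toLowerCase` calls that omit a `Locale`. A minimal sketch (the string value is illustrative) of why the explicit `Locale.ROOT` matters:

```scala
import java.util.Locale

// Under locale-sensitive defaults (e.g. the Turkish locale, where "I".toLowerCase
// yields the dotless "\u0131"), case conversion is not deterministic across JVMs.
// Passing Locale.ROOT makes the result the same everywhere.
val id = "SPARK-ID"
val normalized = id.toLowerCase(Locale.ROOT)  // "spark-id" regardless of default locale
```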
throw new \w+Error\(
JavaConversions
Instead of importing the implicits in scala.collection.JavaConversions._, import
scala.collection.JavaConverters._ and use the .asScala / .asJava methods.
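A brief sketch of the preferred pattern: explicit, call-site conversions instead of the invisible implicits that JavaConversions injects. (In Scala 2.13+, scala.jdk.CollectionConverters supersedes JavaConverters, but the .asScala / .asJava calls are the same.)

```scala
import java.{util => ju}
import scala.collection.JavaConverters._

val javaList: ju.List[Int] = ju.Arrays.asList(1, 2, 3)

// Each conversion is visible and opt-in at the call site:
val scalaSeq: Seq[Int] = javaList.asScala.toSeq  // Java -> Scala
val backToJava: ju.List[Int] = scalaSeq.asJava   // Scala -> Java
```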
org\.apache\.commons\.lang\.
Use Commons Lang 3 classes (package org.apache.commons.lang3.*) instead
of Commons Lang 2 (package org.apache.commons.lang.*)
scala\.concurrent\.ExecutionContext\.Implicits\.global
User queries can use the global thread pool, causing starvation and eventual OOM.
Thus, Spark-internal APIs should not use this thread pool.
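A minimal sketch of the alternative: run internal work on a dedicated, bounded pool instead of `ExecutionContext.Implicits.global`, so user workloads cannot starve it. The pool size here is illustrative, not a Spark setting; `Await` is used only to demonstrate the result.

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

// A dedicated pool isolated from the shared global ExecutionContext.
implicit val internalEc: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

val result = Await.result(Future(21 * 2), 10.seconds)  // runs on internalEc
```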
FileSystem.get\([a-zA-Z_$][a-zA-Z_$0-9]*\)
extractOpt
Use jsonOption(x).map(_.extract[T]) instead of .extractOpt[T], as the latter
is slower.
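A sketch of the jsonOption pattern with json4s (which this rule assumes is on the classpath; Spark defines an equivalent helper internally). Mapping JNothing to None up front means extract[T] never has to throw and catch, which is what makes extractOpt[T] slower.

```scala
import org.json4s._

implicit val formats: Formats = DefaultFormats

// Hypothetical stand-in for Spark's internal helper of the same name.
def jsonOption(json: JValue): Option[JValue] = json match {
  case JNothing => None
  case value    => Some(value)
}

val json: JValue = JObject("name" -> JString("app"))
val name: Option[String] = jsonOption(json \ "name").map(_.extract[String])
```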
java,scala,3rdParty,spark
javax?\..*
scala\..*
(?!org\.apache\.spark\.).*
org\.apache\.spark\..*
COMMA
\)\{
(?m)^(\s*)/[*][*].*$(\r|)\n^\1 [*]
Use Javadoc style indentation for multiline comments
case[^\n>]*=>\s*\{
Omit braces in case clauses.
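A short sketch of the preferred form: a single expression after `=>`, with no surrounding braces.

```scala
// Preferred: no braces around a single-expression case body.
def describe(x: Any): String = x match {
  case i: Int    => s"int: $i"
  case s: String => s"string: $s"
  case _         => "other"
}
```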
new (java\.lang\.)?(Byte|Integer|Long|Short)\(
Use static factory 'valueOf' or 'parseXXX' instead of the deprecated constructors.
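A minimal sketch of the replacements: the boxed-primitive constructors are deprecated (and removed in recent JDKs), whereas `valueOf` may return cached instances and `parseXXX` yields an unboxed primitive.

```scala
// Instead of new java.lang.Integer(42) etc.:
val boxed: java.lang.Integer = java.lang.Integer.valueOf(42)  // may reuse a cached box
val parsed: Int = java.lang.Integer.parseInt("123")           // unboxed parse
val big: java.lang.Long = java.lang.Long.valueOf(1234567890123L)
```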
Please use Apache Log4j 2 instead.
800
30
10
50
-1,0,1,2,3
Objects.toStringHelper
Avoid using Objects.toStringHelper. Use ToStringBuilder instead.
Files\.createTempDir\(
Avoid using com.google.common.io.Files.createTempDir due to CVE-2020-8908.
Use org.apache.spark.util.Utils.createTempDir instead.
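A hedged sketch of the safe alternative for code outside Spark: org.apache.spark.util.Utils.createTempDir is Spark-internal, but the JDK's java.nio.file.Files.createTempDirectory avoids the CVE because it applies owner-only permissions on POSIX systems, unlike Guava's Files.createTempDir.

```scala
import java.nio.file.Files

// Guava's Files.createTempDir creates a world-readable directory (CVE-2020-8908).
// The JDK equivalent restricts permissions to the owner on POSIX file systems.
val tmpDir = Files.createTempDirectory("demo-").toFile
tmpDir.deleteOnExit()
```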
new Path\(new URI\(