
We have a "log file" of millions of SQL-queries (from Hive, Teradata and other SQL-services). We need to use Spark at a Hadoop cluster to analyse these queries... Seems to be natural to use the native SparkSQLparse (Catalyst module or its AstBuilder).

We need to check which tables are queried most frequently by SQL-queries of the log. Here a draft of the algorithm that extracts SQL FROM clause, that seems fromClauseContext in the abstract tree:

Question: is possible to implment this kind of algorithm (steps 2 and 3) using SparkSQLparse?

PS: if this is possible, please provide an outline of how to do it... For example starting with sqlParser.parsePlan("select * from myTable") (or another way?).

We have a "log file" of millions of SQL-queries (from Hive, Teradata and other SQL-services). We need to use Spark at a Hadoop cluster to analyse these queries... Seems to be natural to use the native SparkSQLparse (Catalyst module or its AstBuilder).

We need to check which tables are queried most frequently by SQL-queries of the log. Here a draft of the algorithm that extracts SQL FROM clause, that seems fromClauseContext in the abstract tree:

Question: is possible to implment this kind of algorithm (steps 2 and 3) using SparkSQLparse?

PS: if this is possible, please provide an outline of how to do it... For example starting with sqlParser.parsePlan("select * from myTable") (or another way?).

1 Answer

It is possible, but be aware that the exact solution depends on Spark internal APIs, which can change between releases. Even the exceptions change: for example, handling missing relations in 3.0 and in 3.5.0 requires different source trees, and the actual parsing in 3.0 differs from 3.5.0.
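For example, here is a minimal sketch of how to keep a job running when an individual statement cannot be parsed (the helper name is illustrative; queries written for Teradata or non-Spark Hive dialects will often fail to parse at all):

import scala.util.Try

import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.SparkSqlParser

// Guard each parse attempt so that statements Spark cannot parse (other SQL
// dialects, or exception behaviour that shifts between versions) are skipped
// rather than failing the whole job.
val parser = new SparkSqlParser()

def tryParse(sql: String): Option[LogicalPlan] =
  Try(parser.parsePlan(sql)).toOption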

That said, the actual Spark Expression and Plan tree APIs are well thought out and easy to work with (as long as you are using Scala, of course). With Spark 3.5.1:

import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
import org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias
import org.apache.spark.sql.execution.SparkSqlParser

// Parse the statement into an unresolved logical plan (no catalog lookup needed).
val plan = new SparkSqlParser().parsePlan("select a.*, b.* from a_table a, b_table b where a.id = b.id")

// Collect every table reference in the FROM clause.
val relations = plan.collect {
  case r: UnresolvedRelation => r
}
// Collect the aliases wrapping those table references.
val aliases = plan.collect {
  case a @ SubqueryAlias(_, _: UnresolvedRelation) => a
}
println("relations")
relations.foreach(r => println(r.tableName))
println("alias'")
aliases.foreach(a => println(s"${a.alias} -> ${a.child}"))

will yield:

relations
a_table
b_table
alias'
a -> 'UnresolvedRelation [a_table], [], false
b -> 'UnresolvedRelation [b_table], [], false
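To go from one statement to table-usage counts over the whole log (steps 2 and 3 of the question), the same extraction can be run as a Spark job. A minimal sketch, assuming the log is available as one SQL statement per line; the path, app name and column name below are illustrative, not taken from the question:

import scala.util.Try
import org.apache.spark.sql.{Dataset, SparkSession}
import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
import org.apache.spark.sql.execution.SparkSqlParser

val spark = SparkSession.builder().appName("table-frequency").getOrCreate()
import spark.implicits._

// Hypothetical input: one SQL statement per line of the query log.
val queries: Dataset[String] = spark.read.textFile("/path/to/query_log")

// Step 2: parse each query and pull out the tables referenced in its FROM clause.
val tableNames: Dataset[String] = queries.mapPartitions { it =>
  val parser = new SparkSqlParser() // build one parser per partition, on the executor
  it.flatMap { sql =>
    Try(parser.parsePlan(sql)).toOption.toSeq.flatMap { plan =>
      plan.collect { case r: UnresolvedRelation => r.tableName }
    }
  }
}

// Step 3: count how often each table is referenced and rank by frequency.
tableNames.groupBy("value").count().orderBy($"count".desc).show(20, truncate = false)

One caveat: names defined in a WITH clause also appear as UnresolvedRelation references at parse time, so you may want to filter those out if CTEs should not be counted as tables.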

Tags: apache-spark | How to use SparkSQLparse in a simple FROM analysis - Stack Overflow