We have a "log file" of millions of SQL-queries (from Hive, Teradata and other SQL-services). We need to use Spark at a Hadoop cluster to analyse these queries... Seems to be natural to use the native SparkSQLparse (Catalyst module or its AstBuilder).
We need to check which tables are queried most frequently by SQL-queries of the log. Here a draft of the algorithm that extracts SQL FROM
clause, that seems fromClauseContext in the abstract tree:
Question: is possible to implment this kind of algorithm (steps 2 and 3) using SparkSQLparse?
PS: if this is possible, please provide an outline of how to do it... For example starting with sqlParser.parsePlan("select * from myTable")
(or another way?).
We have a "log file" of millions of SQL-queries (from Hive, Teradata and other SQL-services). We need to use Spark at a Hadoop cluster to analyse these queries... Seems to be natural to use the native SparkSQLparse (Catalyst module or its AstBuilder).
We need to check which tables are queried most frequently by SQL-queries of the log. Here a draft of the algorithm that extracts SQL FROM
clause, that seems fromClauseContext in the abstract tree:
Question: is possible to implment this kind of algorithm (steps 2 and 3) using SparkSQLparse?
PS: if this is possible, please provide an outline of how to do it... For example starting with sqlParser.parsePlan("select * from myTable")
(or another way?).
1 Answer
It is possible, but you should be aware that the exact solution depends on Spark internal APIs that can change regularly. Even the exceptions change: for example, handling missing relations in 3.5.0 and 3.0 requires different source trees, and the actual parsing in 3.0 differs from 3.5.0.
That said, the actual Spark Expression and Plan tree APIs are well thought out and easy to work with (as long as you are using Scala, of course). With Spark 3.5.1:
import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
import org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias
import org.apache.spark.sql.execution.SparkSqlParser

// Parse the query into an unresolved logical plan (no catalog lookup is performed).
val plan = new SparkSqlParser().parsePlan("select a.*, b.* from a_table a, b_table b where a.id = b.id")

// Collect every table reference that appears in the plan.
val relations = plan.collect {
  case r: UnresolvedRelation => r
}

// Collect the aliases that wrap a table reference.
val aliases = plan.collect {
  case a @ SubqueryAlias(_, _: UnresolvedRelation) => a
}

println("relations")
relations.foreach(r => println(r.tableName))
println("alias'")
aliases.foreach(a => println(s"${a.alias} -> ${a.child}"))
will yield:
relations
a_table
b_table
alias'
a -> 'UnresolvedRelation [a_table], [], false
b -> 'UnresolvedRelation [b_table], [], false
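To get from a single query to the frequency count the question asks for, the same extraction can be wrapped in a function and applied to the whole log. The following is a minimal sketch under assumed names: it assumes the log has already been loaded as a Dataset[String] called queries (one SQL statement per line, at the hypothetical path /path/to/sql_log), and it wraps the parse in scala.util.Try because many Hive or Teradata statements will not be valid Spark SQL and should simply be skipped rather than fail the job.

import scala.util.Try
import org.apache.spark.sql.{Dataset, SparkSession}
import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation
import org.apache.spark.sql.execution.SparkSqlParser

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Extract the referenced table names from one SQL string; statements the
// Spark parser cannot handle yield an empty result instead of an exception.
def tablesOf(sql: String): Seq[String] =
  Try(new SparkSqlParser().parsePlan(sql))
    .map(_.collect { case r: UnresolvedRelation => r.tableName })
    .getOrElse(Seq.empty)

// Hypothetical input: one raw SQL statement per line of the log file.
val queries: Dataset[String] = spark.read.textFile("/path/to/sql_log")

// Count how often each table name appears across all queries.
val tableCounts = queries
  .flatMap(tablesOf _)
  .groupBy("value")
  .count()
  .orderBy($"count".desc)

tableCounts.show(20, truncate = false)

Constructing a new SparkSqlParser per query keeps the sketch simple but is wasteful; for millions of queries you would likely reuse one parser per partition (e.g. via mapPartitions), and you may also want to normalise table names (case, database prefixes) before counting.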