
A quick tutorial on using GraphQL with Python.
Earlier this year I spent some time experimenting with gRPC for defining and integrating server/client pairs, and this weekend I wanted to spend a bit of time doing a similar experiment with GraphQL.
I couldn’t find any particularly complete tutorials for doing this in Python, so I’ve written up what I hope is a useful collection of notes for someone looking to try out GraphQL in Python.
Full tutorial code is on GitHub.
Goals
At Digg, we had a simple service which would crawl a given URL and return its title, a summary and any worthy images. Early Digg relied heavily on unreliable scraping heuristics to extract these characteristics, but most websites these days have enough social media metadata to greatly simplify the process.
The project we're building is a recreation of that crawling service, which we'll build using the extraction library I wrote some years back (which was on my mind because I recently updated it to be Python 3 compatible).
Once we're done, we'll be able to submit a client request like:
```
{
  website(url: "//www.klytx.com/migrations") {
    title
    image
  }
}
```
To which the server's response will be:
```
{
  "data": {
    "website": {
      "title": "Migrations: the sole scalable fix to technical debt.",
      "image": "https://www.klytx.com/static/blog/2018/migrations-ho.png"
    }
  }
}
```
Each website will also include a description field.
Setup
Assuming you have Python 3 available locally, let's start by creating a virtual environment for our dependencies, and then install them:
```
mkdir tutorial
cd tutorial
python3 -m venv env
. ./env/bin/activate
pip install extraction graphene flask-graphql requests
```
If you want the exact versions used in this tutorial, you can find them in the requirements.txt on GitHub.
Crawling and extracting
Before jumping into GraphQL, let's quickly write the code for crawling and extracting data from a website, since that part is a bit of a sideshow.
```
import graphene
import extraction
import requests


def extract(url):
    html = requests.get(url).text
    extracted = extraction.Extractor().extract(html, source_url=url)
    return extracted
```
Which we’d use as follows:
```
>>> extract('//www.klytx.com/migrations')
<Extracted: (title: 'Migrations: the sole scalable fix to technical debt.', 4 more), (url: 'https://www.klytx.com/migrations/', 1 more), (image: '//www.klytx.com/static/blog/2018/migrations-he', 1 more), (description: 'Migrations are both essential and frustrating', 5 more), (feed: '//www.klytx.com/feeds/')>
```
Each Extracted object makes five pieces of data available: title, url, image, description and feed.
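For instance, here's a quick sketch of reading those attributes off the result of our extract helper (the attribute names are taken from the repr above):

```
extracted = extract('//www.klytx.com/migrations')
print(extracted.title)  # 'Migrations: the sole scalable fix to technical debt.'
print(extracted.image)  # the first worthy image found on the page
print(extracted.feed)   # '//www.klytx.com/feeds/'
```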
Full code in extraction_tutorial/schema.py
Schema
At the foundation of every GraphQL API is a GraphQL schema, which describes the objects, fields and types for the exposed API. We'll use graphene to describe our schema as Python objects.
Writing a schema to describe an extracted website is fairly straightforward, for example:
```
import graphene


class Website(graphene.ObjectType):
    url = graphene.String(required=True)
    title = graphene.String()
    description = graphene.String()
    image = graphene.String()
```
Here we're only using graphene.String to describe our fields' types, but each field could be another object we've described, or one of a number of other enums, scalars, lists and such.
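As an illustration of those richer types, a hypothetical variant of Website might mix in some of graphene's other built-in fields (the extra fields here are invented for the example, not part of the tutorial's schema):

```
import graphene


class RichWebsite(graphene.ObjectType):
    url = graphene.String(required=True)         # required scalar
    status_code = graphene.Int()                 # another built-in scalar
    crawled_at = graphene.DateTime()             # date/time scalar
    image_urls = graphene.List(graphene.String)  # list of scalars
```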
A bit unexpectedly, we also have to write a schema describing the Query we'll use to retrieve these objects:
```
import graphene


class Query(graphene.ObjectType):
    website = graphene.Field(Website, url=graphene.String())

    def resolve_website(self, info, url):
        extracted = extract(url)
        return Website(url=url,
                       title=extracted.title,
                       description=extracted.description,
                       image=extracted.image)
```
In this case, Website is the object type we support querying against, url is a parameter that'll be passed along to the resolution function, and resolve_website is called by each request for a Website object.
Note that a fair amount of magic is happening here, and the names have to match up exactly. Most of the issues I ran into were caused by typos across these fields, which led to them matching up incorrectly. Also note that extract is the function we wrote in the previous section.
The final step is to create an instance of graphene.Schema, which you'll pass to your server to describe the new API you've created:
```
schema = graphene.Schema(query=Query)
```
And with that done, you've created your schema.
Full code in extraction_tutorial/schema.py
Server
Now that we've written our schema, we can start serving it over HTTP using flask and flask-graphql:
```
from flask import Flask
from flask_graphql import GraphQLView

from extraction_tutorial.schema import schema

app = Flask(__name__)
app.add_url_rule('/', view_func=GraphQLView.as_view('graphql', schema=schema, graphiql=True))
app.run()
```
Note that your schema will live at a different import path unless you've downloaded the example code. It's also fine to put the schema and the server into a single file if you'd rather not deal with import paths.
Now you can run the server:

```
python server.py
```

after which it'll be up and available at localhost:5000.
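As a quick sanity check, you could hit it from the command line (this assumes flask-graphql's default handling of GET requests with a query parameter; the Accept header keeps it from serving the GraphiQL page, and -g stops curl from mangling the braces):

```
curl -g -H 'Accept: application/json' \
  'http://localhost:5000/?query={website(url:"//www.klytx.com/migrations"){title}}'
```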
Client
Although they exist, you don't need a special GraphQL client to make requests against your new API; you can stick with whatever HTTP client you're used to. We'll use requests in this example.
```
import requests

q = """
{
  website(url: "//www.klytx.com/migrations") {
    title
    image
    description
  }
}
"""

resp = requests.post("http://localhost:5000/", params={'query': q})
print(resp.text)
```
Running that script, the output would be:
```
{
  "data": {
    "website": {
      "title": "Migrations: the sole scalable fix to technical debt.",
      "image": "https://www.klytx.com/static/blog/2018/migrations-ho.png",
      "description": "Migrations are both essential and frustrating..."
    }
  }
}
```
You can customize the contents of q to retrieve different fields, or even use aliases to retrieve multiple objects in a single query.
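For example, a query using aliases to fetch two pages in a single request might look like this (the alias names and the second URL are just for illustration):

```
{
  migrations: website(url: "//www.klytx.com/migrations") {
    title
  }
  homepage: website(url: "//www.klytx.com/") {
    title
  }
}
```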
Full code in extraction_tutorial/http_client.py
Extending objects
Potentially the most interesting and exciting part of GraphQL is how easy it is to extend your objects without causing compatibility issues in your clients. For example, let's imagine we wanted to start returning pages' RSS feeds as well, through a new feed field.
We can add it to Website and update our resolve_website method to return the feed field as follows:
```
import graphene


class Website(graphene.ObjectType):
    url = graphene.String(required=True)
    title = graphene.String()
    description = graphene.String()
    image = graphene.String()
    feed = graphene.String()


class Query(graphene.ObjectType):
    website = graphene.Field(Website, url=graphene.String())

    def resolve_website(self, info, url):
        extracted = extract(url)
        return Website(url=url,
                       title=extracted.title,
                       description=extracted.description,
                       image=extracted.image,
                       feed=extracted.feed)
```
If you wanted to retrieve this new field, you'd just update your query to also request it, in addition to the fields like title and image that you're already retrieving.
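For example, the earlier query would simply grow a feed entry:

```
{
  website(url: "//www.klytx.com/migrations") {
    title
    image
    feed
  }
}
```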
Introspection
One of the most powerful aspects of GraphQL is that its servers support introspection, which allows both humans and automated tools to understand the available objects and operations.
The best example of this is that, if you're running the example we've built, you can navigate to localhost:5000 and use GraphiQL to test your new API directly.
These capabilities aren't limited to GraphiQL; you can also integrate with them using the same query interface you'd use to query your new API. As a simple example, we can ask about the available queries exposed by our sample service:
```
{
  __type(name: "Query") {
    fields {
      name
      args {
        name
      }
    }
  }
}
```
To which the server would reply:
```
{
  "data": {
    "__type": {
      "fields": [
        {
          "name": "website",
          "args": [
            {
              "name": "url"
            }
          ]
        }
      ]
    }
  }
}
```
There are a bunch of other introspection queries available, which are a bit awkward to write, but which expose a tremendous amount of power to tool builders. Definitely worth playing around with!
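To illustrate, here's a minimal sketch of issuing that same introspection query through requests, exactly like the HTTP client we wrote earlier:

```
import requests

# The __type introspection query from above, sent like any other query.
q = """
{
  __type(name: "Query") {
    fields {
      name
      args {
        name
      }
    }
  }
}
"""

resp = requests.post("http://localhost:5000/", params={'query': q})
print(resp.text)
```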
Closing thoughts
Overall, I was quite impressed with how easy it was to work with GraphQL, and even more impressed with how easy it was to integrate against. This approach to describing objects was more intuitive to me than gRPC's, with the latter still being more akin to writing a protocol than describing an object.
At this point, if I was writing a product API, GraphQL would be the first tool I'd reach for, and if I was writing a piece of infrastructure, I'd still prefer gRPC, especially for its authentication and tight HTTP/2 integration (e.g. for bi-directional streaming).
There are a number of other questions to dig into here at some point:
- How do they fare in terms of data compression?
- Does compression even matter much if the server compresses the results?
- Does GraphQL have worse protocol compression but superior field compression since folks have to explicitly ask for what they need?
- How well do their field deprecation stories work in practice? Both have some story around deprecation, neither seeming ideal, with GraphQL's deprecation warnings seeming a bit superior, since you could imagine writing your client libraries to surface all deprecation warnings returned by the API with a log of some sort (see the sketch after this list).
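For reference, graphene exposes GraphQL's deprecation mechanism through a deprecation_reason argument; a minimal sketch, where the replacement hero_image field is hypothetical:

```
import graphene


class Website(graphene.ObjectType):
    url = graphene.String(required=True)
    # The deprecated field keeps working, but introspection and tools
    # like GraphiQL will surface this reason to clients.
    image = graphene.String(deprecation_reason="Use hero_image instead.")
    hero_image = graphene.String()  # hypothetical replacement field
```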
I'm sure there are a bunch more questions as well!