Shockwaves in the SEO world: thousands of Google Search API documents leak, and Google stands accused of saying one thing and doing another


The main content below is drawn from this article:

According to Rand Fishkin (co-author of The Art of SEO and co-founder of Moz), he received an email on May 5 claiming access to a large trove of API documentation from Google's Search division. The email said the documents had been confirmed as authentic by former Google employees, and that some of those employees had shared additional, private information about Google's search operations.

Many of the items contradict public statements Google representatives have made over the years, in particular the company's repeated denials that it uses click-based user signals, that subdomains are ranked separately, that newer websites go through a sandbox period, and that domain age is collected or considered, among other things.

Naturally, Rand Fishkin was skeptical at first. But the claims from this source, who asked to remain anonymous, were not baseless. For example:

  • In Google's early days, the search team realized it needed the full clickstream data of a large number of web users (i.e., every URL visited through the browser) to improve the quality of its search results.
  • A system called "NavBoost" (mentioned by VP of Search Pandu Nayak in his DOJ testimony) initially gathered data from Google's Toolbar PageRank, and the desire for more clickstream data was a key motivation for launching the Chrome browser in 2008.
  • NavBoost uses the number of searches for a given keyword to identify trending demand, the number of clicks on search results (the author ran several experiments on this between 2013 and 2015), and the effect of long versus short clicks (a theory the author put forward in a 2015 video).
  • Google uses cookie history, logged-in Chrome data, and pattern detection (referred to in the leak as "unsquashed" versus "squashed" clicks) as effective means of fighting manual and automated click spam.
  • NavBoost also scores user intent. For example, certain attention thresholds and clicks on videos or images will trigger video or image features for that query and for NavBoost-associated queries.
  • Google examines clicks and engagement on searches both during and after the main query (referred to as the "NavBoost query"). For example, if many users search for "Rand Fishkin," fail to find SparkToro, immediately change their query to "SparkToro," and click SparkToro.com in the results, then SparkToro.com (and websites mentioning "SparkToro") will get a ranking boost for the "Rand Fishkin" keyword (see the sketch below).
  • NavBoost's data is also used to evaluate the overall quality of a website (the anonymous source speculated this may be what Google and SEOs call "Panda"). That evaluation can result in a ranking boost or a demotion.
  • Other, lesser factors, such as penalties for domains that exactly match unbranded search queries (e.g., mens-luxury-watches.com or milwaukee-homes-for-sale.net), the newer "BabyPanda" score, and spam signals, are also considered during the quality evaluation process.
  • NavBoost geo-fences click data, accounting for differences by country and state/province, as well as mobile versus desktop usage. If Google lacks data for certain regions or user-agents, however, it may apply a generic treatment to the query results.
  • During the Covid-19 pandemic, Google maintained whitelists of websites that could rank highly for Covid-related searches.
  • Similarly, during elections, Google maintained whitelists of websites that should be shown (or demoted) for election-related information.

And these are just the tip of the iceberg.
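To make the query-refinement item in the list above concrete (the "Rand Fishkin" → "SparkToro" example), here is a minimal, hypothetical sketch in Python. Nothing in the leak shows Google's actual code; the function, thresholds, and data shapes below are invented purely to illustrate the described behavior.

```python
# Hypothetical sketch of the query-refinement signal described above: if many
# searchers refine query A to query B and then click a result, that result
# becomes a boost candidate for query A. All names and thresholds are invented.
from collections import defaultdict

def refinement_boost_candidates(sessions, min_sessions=100):
    """sessions: iterable of (initial_query, refined_query, clicked_url) tuples.

    Returns {initial_query: {clicked_url: count}} for pairs with enough support.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for initial_query, refined_query, clicked_url in sessions:
        if clicked_url and refined_query and refined_query != initial_query:
            counts[initial_query][clicked_url] += 1
    return {
        query: {url: n for url, n in urls.items() if n >= min_sessions}
        for query, urls in counts.items()
    }

# Example: many searchers refine "rand fishkin" to "sparktoro" and click sparktoro.com,
# so sparktoro.com becomes a boost candidate for the "rand fishkin" query.
sessions = [("rand fishkin", "sparktoro", "sparktoro.com")] * 150
print(refinement_boost_candidates(sessions))
# {'rand fishkin': {'sparktoro.com': 150}}
```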

Explosive claims demand solid evidence. While some of this overlaps with what was disclosed in the Google/DOJ case (parts of which can be seen in this 2020 thread), much of it was newly revealed and suggested access to inside information.

So, last Friday, May 24 (after several rounds of email), I had a video call with the anonymous source.

Screenshot of Rand's video call with the source

Update, 5/28 at 10:00am Pacific: the anonymous source has decided to come forward. The video below reveals his identity: Erfan Azimi, an SEO practitioner and founder of EA Eagle Digital.

Before the emails and video call, the author had never heard of or met Erfan. He asked that his identity not be disclosed and that he be referred to only with the following quote:

An eagle uses the storm to reach heights it could not reach on its own.

– Matshona Dhliwayo

After the call, the author was able to verify Erfan's work history, mutual acquaintances in the marketing world, and (via industry contacts, including Google employees) his claims of attending specific events, but could not confirm the details of those meetings or exactly what was discussed.

During the call, Erfan showed the leaked material: more than 2,500 pages of API documentation containing 14,014 attributes (API features), apparently from Google's internal "Content API Warehouse." Based on the documentation's commit history, this code was pushed to GitHub on March 27, 2024, and not removed until May 7. (Note: because Erfan only agreed to reveal his identity after this article was written, the text below still refers to him as the anonymous source.)

The documentation does not show the weight of particular elements in the search ranking algorithm, nor does it prove which elements are actually used in the ranking systems. But it does reveal details about the data Google collects. Here is a sample from the documentation:

Screenshot from the leaked data concerning "good" and "bad" clicks, including click length (i.e., the time between a visitor clicking through to a site from Google's results page and returning to the search results)
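As a rough illustration of the "click length" idea in the caption above, here is a hypothetical sketch. The leak names attributes for good, bad, and long clicks but does not reveal how they are computed; the thresholds and labels below are invented.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the leaked documentation does not disclose actual cutoffs.
SHORT_CLICK_SECONDS = 10
LONG_CLICK_SECONDS = 60

@dataclass
class Click:
    url: str
    dwell_seconds: float  # time between leaving the results page and returning to it

def label_click(click: Click) -> str:
    """Label a click as short (likely pogo-sticking), long (likely satisfied), or medium."""
    if click.dwell_seconds < SHORT_CLICK_SECONDS:
        return "short"
    if click.dwell_seconds >= LONG_CLICK_SECONDS:
        return "long"
    return "medium"

print(label_click(Click("example.com/page", 4.2)))    # short
print(label_click(Click("example.com/page", 180.0)))  # long
```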

After walking me through a pile of these API modules, the source explained his motives (transparency, holding Google accountable, and so on) and his hope: that I would publish an article sharing the leak, highlighting some of its more interesting points, and rebutting the "lies" that Googlers have "spread over the years."

Is this leaked API documentation real? Can we trust it?

The crucial next step was to verify the authenticity of the Content API Warehouse documentation. So I reached out to several former Google employees, shared the leaked docs, and asked for their views. Three ex-Googlers replied: one said they didn't feel comfortable looking at or commenting on the material; the other two shared the following, on condition of anonymity:

  • "I didn't have access to this code when I worked there. But this certainly looks legitimate."
  • "It has all the hallmarks of an internal Google API."
  • "It's a Java-based API, and someone spent a lot of time adhering to Google's internal documentation and naming standards."
  • "I'd need more time to be sure, but this matches the internal documentation I'm familiar with."
  • "In a brief review, I saw nothing to suggest it isn't authentic."

Next, I needed help analyzing and interpreting the naming conventions and the more technical aspects of the documentation. I know a bit about APIs, but I haven't written code in twenty years and haven't done professional SEO work in six. So I contacted one of the world's best technical SEOs: Mike King, founder of iPullRank.

On a 40-minute call on Friday afternoon, Mike reviewed the leaked documents and confirmed my suspicion: these do appear to be authentic documents from Google's Search division, containing a large amount of previously unconfirmed information about Google's inner workings.

It's unreasonable to ask one person (let alone a father, husband, and business owner) to review 2,500 technical documents over a single weekend. Even so, Mike did his best.

He has already put together an extremely detailed initial review of the Google API leak, which I will quote repeatedly below. He has also agreed to speak at SparkTogether 2024 in Seattle, Washington on October 8, 2024, where he'll cover the leak in far greater detail and with full transparency, drawing on the coming months of analysis.

My qualifications and motives for publishing this

Before going any further, a few disclosures: I no longer work in the SEO field. My SEO knowledge and experience is more than six years out of date. I don't have the technical expertise or the knowledge of Google's internals to analyze the leaked API documentation or confirm its authenticity on my own (hence the need for Mike's help and the ex-Googlers' input).

So why publish this article?

Because when I spoke with the person providing this information, I found them credible, thoughtful, and deeply knowledgeable. Despite my considerable skepticism during our conversations, I saw no red flags and no malicious motive. Their only goal aligned closely with my own: to hold Google accountable for public statements and private remarks that conflict with the leaked documents, and to bring greater transparency to the search marketing field. And they believed that, despite my years away from SEO, I was the right person to share this information publicly.

These are goals I've cared deeply about for nearly two decades. Although my career has shifted (I now run two companies: SparkToro, which makes audience research software, and Snackbar Studio, an indie video game developer), my interest in and ties to the world of search engine optimization remain strong. I feel a responsibility to share information about how the world's largest search engine operates, especially information Google would prefer to keep secret. Unfortunately, I'm not sure where else news this explosive could be published.

Years ago, before Danny Sullivan left journalism to become Google's Search Liaison, he would have been my first choice to handle a leak of this magnitude. He had the credibility, track record, knowledge, and experience to fairly examine and present claims like these in the court of public opinion. Many times over the past few years I've wished for Danny's calm, even-handed approach, tough on Google but fair, when news like this touched on the company's statements (for example, his excellent writing on Google's dubious privacy justifications for withholding organic keyword data).

Whatever Google is paying him, it's not nearly enough.

Dear readers, I'm sorry you're stuck with me telling this story rather than Danny. But since you've found your way here, and may not know much about my background or credentials, a brief introduction.

Okay, back to the Google leak.


What is Google's Content API Warehouse?

When reading through this massive trove of API documentation, the first questions are likely: "What is this? What is it used for? Why does it exist in the first place?"


The leak appears to have come from GitHub, and the most credible explanation matches what my anonymous source told me on our call: the documents were inadvertently made public for a brief period (many links in the documentation point to private GitHub repositories and to internal pages on Google's corporate site that require specific Google credentials). During that accidental window of exposure between March and May 2024, the API documentation propagated to Hexdocs (which indexes public GitHub repositories) and was found and circulated by others (I'm certain other copies exist, though oddly I've yet to find any public discussion of them).

According to my ex-Googler sources, documentation like this exists on almost every Google team: it explains the various API attributes and modules so that people on a project can familiarize themselves with the data elements available to them. This leak matches other material in public GitHub repositories and in Google Cloud's API documentation, using the same notation style and formatting, including process/module/feature names and references.

If that sounds like a pile of technical jargon, think of it as an instruction manual for members of Google's search engine team. It's like a library's catalog of books, a kind of card catalog that tells employees who need to know what resources exist and how to get them.
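Purely to illustrate the "card catalog" idea, here is a hypothetical Python rendering of what one such entry might boil down to: a named module listing attributes, their types, and human-readable descriptions. The module and attribute names below are invented (the first description paraphrases the site display name field quoted later in this article); the real leaked modules are far larger and use Google's internal documentation format.

```python
from dataclasses import dataclass, field

# Invented names, illustrating the shape of a "card catalog" entry rather than
# reproducing any real module from the leak.
@dataclass
class Attribute:
    name: str
    type: str
    description: str
    deprecated: bool = False

@dataclass
class Module:
    name: str
    summary: str
    attributes: list[Attribute] = field(default_factory=list)

example_module = Module(
    name="ExampleSiteInfo",
    summary="Illustrative module describing site-level display data.",
    attributes=[
        Attribute("siteDisplayName", "string",
                  "Domain-level display name of the website, e.g. 'Google' for google.com."),
        Attribute("legacySiteDisplayName", "string",
                  "Deprecated in favor of siteDisplayName.", deprecated=True),
    ],
)
```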

But libraries are public, while Google Search is one of the most secretive, closely guarded black boxes in the world. In the past 25 years, there has never been a leak from Google's Search division of this scale or this level of detail.

How confident can we be that Google's search engine actually uses everything detailed in these API documents?

That's an open question. Google may have deprecated some of these interfaces, some may be used only for testing or internal projects, and some API features may never have been used at all.

However, the documentation does reference deprecated features, and carries specific notes on other features indicating they should no longer be used. That strongly suggests the features without such annotations were still in use as of the March 2024 leak.
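One way an analyst might act on that observation is to scan each attribute's description for deprecation language and partition the attributes accordingly. The sketch below is an assumption about how one could process the leaked files, not anything from Google's tooling.

```python
import re

# Hypothetical helper: split attributes into "flagged as deprecated / do-not-use"
# vs. "no such annotation". The regex and the input structure are assumptions.
DEPRECATION_PATTERN = re.compile(r"deprecat|no longer used|do not use", re.IGNORECASE)

def partition_by_deprecation(attributes: dict[str, str]) -> tuple[list[str], list[str]]:
    """attributes maps attribute name -> its documentation string."""
    deprecated, presumed_live = [], []
    for name, description in attributes.items():
        (deprecated if DEPRECATION_PATTERN.search(description) else presumed_live).append(name)
    return deprecated, presumed_live

sample = {
    "siteDisplayName": "The domain-level display name of the website...",
    "legacySiteDisplayName": "This field is being deprecated as of August 2023...",  # invented name
}
print(partition_by_deprecation(sample))
# (['legacySiteDisplayName'], ['siteDisplayName'])
```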

Nor can we be sure the March leak reflects the latest version of this documentation. The most recent date I could find in the API docs is August 2023:

[Screenshot of the API documentation entry referencing August 2023]

The relevant description reads:

"The domain-level display name of the website, e.g. 'Google' for google.com. See go/site-display-name for more details. As of August 2023, this field is being deprecated in favor of the info.[AlternativeTitlesResponse].site_display_name_response field, which also contains host-level site display names with additional information."

A reader could reasonably infer that this documentation was current as of last summer (there are other references to changes in 2023 and earlier years, going all the way back to 2005), and possibly as recent as the March 2024 date of the leak.

Obviously, Google Search changes significantly every year; the recently introduced and much-criticized AI Overviews, for example, do not appear in this leak. We can only guess which items are still used in Google's ranking systems today. The documentation contains plenty of fascinating references, many of them entirely new to anyone who isn't a Google search engineer.

That said, I'd caution readers against pointing to a single API feature in the leak and declaring, "See! This proves Google uses XYZ in its rankings." That's not quite proof. It is stronger evidence than a patent filing or a Googler's public statement, but it is still not a guarantee.

Even so, this leak is the closest thing to hard evidence we've had since Google executives testified in the DOJ trial last year. And speaking of that testimony, much of it is confirmed and expanded upon in the leaked documents, as Mike describes in detail in his post.

What can we learn from the Data Warehouse Leak?

I expect that interesting and marketing-applicable insights will be mined from this massive file set for years to come. It’s simply too big and too dense to think that a weekend of browsing could unearth a comprehensive set of takeaways, or even come close.

However, I will share five of the most interesting, early discoveries in my perusal, some that shed new light on things Google has long been assumed to be doing, and others that suggest the company’s public statements (especially those on what they “collect”) have been erroneous. Because doing so would be tedious and could be perceived as personal grievances (given Google’s historic attacks on my work), I won’t bother showing side-by-sides of what Googlers said vs. what this document insinuates. Besides, Mike did a great job of that in his post.

Instead, I’ll focus on interesting and/or useful takeaways, and my conclusions from the whole of the modules I’ve been able to review, Mike’s piece on the leak, and how this combines with other things we know to be true of Google.

#1: Navboost and the use of clicks, CTR, long vs. short clicks, and user data

A handful of modules in the documentation make reference to features like “goodClicks,” “badClicks,” “lastLongestClicks,” impressions, squashed, unsquashed, and unicorn clicks. These are tied to Navboost and Glue, two words that may be familiar to folks who reviewed Google’s DOJ testimony. Here’s a relevant excerpt from DOJ attorney Kenneth Dintzer’s cross-examination of Pandu Nayak, VP of Search on the Search Quality team:

Q. So remind me, is navboost all the way back to 2005?
A. It’s somewhere in that range. It might even be before that.

Q. And it’s been updated. It’s not the same old navboost that it was back then?
A. No.

Q. And another one is glue, right?
A. Glue is just another name for navboost that includes all of the other features on the page.

Q. Right. I was going to get there later, but we can do that now. Navboost does web results, just like we discussed, right?
A. Yes.

Q. And glue does everything else that’s on the page that’s not web results, right?
A. That is correct.

Q. Together they help find the stuff and rank the stuff that ultimately shows up on our SERP?
A. That is true. They’re both signals into that, yes.

A savvy reader of these API documents would find they support Mr. Nayak’s testimony (and align with Google’s patent on site quality):

Google appears to have ways to filter out clicks they don’t want to count in their ranking systems, and include ones they do. They also seem to measure length of clicks (i.e. pogo-sticking – when a searcher clicks a result and then quickly clicks the back button, unsatisfied by the answer they found) and impressions.

Plenty has already been written about Google’s use of click data, so I won’t belabor the point. What matters is that Google has named and described features for that measurement, adding even more evidence to the pile.
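To picture how such named click features might be aggregated per query/result pair, here is a speculative sketch. The attribute names echo those referenced in the leak (goodClicks, badClicks, lastLongestClicks, impressions), but the update logic, thresholds, and data shapes are invented; the documentation does not reveal how these values are actually computed or combined.

```python
from dataclasses import dataclass

@dataclass
class ClickFeatures:
    impressions: int = 0
    good_clicks: int = 0           # clicks counted toward the result (e.g., long, satisfied)
    bad_clicks: int = 0            # clicks filtered out (e.g., quick bounces, suspected spam)
    last_longest_clicks: int = 0   # times this result got the session's last and longest click

def record_impression(f: ClickFeatures) -> None:
    f.impressions += 1

def record_click(f: ClickFeatures, dwell_seconds: float,
                 suspected_spam: bool, last_longest_in_session: bool) -> None:
    """Fold one observed click into the per-(query, URL) feature record."""
    if suspected_spam or dwell_seconds < 10:   # invented threshold
        f.bad_clicks += 1
        return
    f.good_clicks += 1
    if last_longest_in_session:
        f.last_longest_clicks += 1

def ctr(f: ClickFeatures) -> float:
    """Click-through rate over counted (good) clicks."""
    return f.good_clicks / f.impressions if f.impressions else 0.0
```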

#2: Use of Chrome browser clickstreams to power Google Search

My anonymous source claimed that way back in 2005, Google wanted the full clickstream of billions of Internet users, and with Chrome, they’ve now got it. The API documents suggest Google calculates several types of metrics that can be called using Chrome views related to both individual pages and entire domains.

This document, describing the features around how Google creates Sitelinks, is particularly interesting. It showcases a call named topUrl, which is “A list of top urls with highest two_level_score, i.e., chrome_trans_clicks.” My read is that Google likely uses the number of clicks on pages in Chrome browsers and uses that to determine the most popular/important URLs on a site, which go into the calculation of which to include in the sitelinks feature.

[Screenshot of Google search results showing sitelinks]

For example, in the above screenshot from Google's results, pages like "Pricing," "Blog," and "Login" are our most-visited, and Google knows this through its tracking of billions of Chrome users' clickstreams.
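If that reading is right, sitelink candidate selection could reduce to something like the sketch below. chrome_trans_clicks and two_level_score are attribute names referenced in the leaked document, but the selection logic and the cutoff are my guesses, not Google's code.

```python
# Hypothetical sketch: choose a site's sitelink candidates by Chrome-derived click counts.
def pick_sitelink_candidates(chrome_trans_clicks: dict[str, int], max_links: int = 6) -> list[str]:
    """Return the URLs with the highest Chrome click counts (a stand-in for two_level_score)."""
    ranked = sorted(chrome_trans_clicks.items(), key=lambda kv: kv[1], reverse=True)
    return [url for url, _ in ranked[:max_links]]

site_clicks = {"/pricing": 12000, "/blog": 9500, "/login": 8700, "/about": 2100, "/careers": 300}
print(pick_sitelink_candidates(site_clicks, max_links=3))
# ['/pricing', '/blog', '/login']
```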

#3: Whitelists in Travel, Covid, and Politics

A module on "Good Quality Travel Sites" would lead reasonable readers to conclude that a whitelist exists for Google in the travel sector (it's unclear whether this applies exclusively to Google's "Travel" search tab, or to web search more broadly). References in several places to flags for "isCovidLocalAuthority" and "isElectionAuthority" further suggest that Google whitelists particular domains that are appropriate to show for highly controversial or potentially problematic queries.

For example, following the 2020 US Presidential election, one candidate claimed (without evidence) that the election had been stolen, and encouraged their followers to storm the Capitol and take potentially violent action against lawmakers, i.e. commit an insurrection.

Google would almost certainly be one of the first places people turned to for information about this event, and if their search engine returned propaganda websites that inaccurately portrayed the election evidence, that could directly lead to more contention, violence, or even the end of US democracy. Those of us who want free and fair elections to continue should be very grateful Google’s engineers are employing whitelists in this case.
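To illustrate how flags like the isCovidLocalAuthority and isElectionAuthority references above could gate results for sensitive queries, here is a speculative sketch. The flag names come from the leak; the query classification, boost value, and everything else below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    url: str
    base_score: float
    is_covid_local_authority: bool = False
    is_election_authority: bool = False

def rerank_sensitive_query(candidates: list[Candidate], topic: str) -> list[Candidate]:
    """Boost whitelisted authorities for covid- or election-related queries (illustrative only)."""
    def score(c: Candidate) -> float:
        whitelisted = ((topic == "covid" and c.is_covid_local_authority)
                       or (topic == "election" and c.is_election_authority))
        return c.base_score + (10.0 if whitelisted else 0.0)  # invented boost
    return sorted(candidates, key=score, reverse=True)
```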

#4: Employing Quality Rater Feedback

Google has long had a quality rating platform called EWOK (Cyrus Shepard, a notable leader in the SEO space, spent several years contributing to this and wrote about it here). We now have evidence that some elements from the quality raters are used in the search systems.

How influential these rater-based signals are, and what precisely they’re used for is unclear to me in an initial read, but I suspect some thoughtful SEO detectives will dig into the leak, learn, and publish more about it. What I find fascinating is that scores and data generated by EWOK’s quality raters may be directly involved in Google’s search system, rather than simply a training set for experiments. Of course, it’s possible these are “just for testing,” but as you browse through the leaked documents, you’ll find that when that’s true, it’s specifically called out in the notes and module details.

This one calls out a “per document relevance rating” sourced from evaluations done via EWOK. There’s no detailed notation, but it’s not much of a logic-leap to imagine how important those human evaluations of websites really are.

[Screenshot of the leaked module documentation]

This one calls out “Human Ratings (e.g. ratings from EWOK)” and notes that they’re “typically only populated in the evaluation pipelines,” which suggests they may be primarily training data in this module (I’d argue that’s still a hugely important role, and marketers shouldn’t dismiss how important it is that quality raters perceive and rate their websites well).
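As one way to picture how rater data could feed an evaluation pipeline (per the note that these ratings are "typically only populated in the evaluation pipelines"), here is a speculative sketch that scores a ranked list against human relevance labels using DCG. Beyond the existence of per-document relevance ratings, everything here is an assumption.

```python
import math

def dcg_from_rater_labels(ranked_urls: list[str], rater_relevance: dict[str, float]) -> float:
    """Discounted cumulative gain, using human relevance ratings as the gain values."""
    return sum(
        rater_relevance.get(url, 0.0) / math.log2(rank + 2)
        for rank, url in enumerate(ranked_urls)
    )

labels = {"a.com": 3.0, "b.com": 1.0, "c.com": 0.0}  # hypothetical rater scores
print(round(dcg_from_rater_labels(["a.com", "c.com", "b.com"], labels), 3))  # 3.5
```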

#5: Google Uses Click Data to Determine How to Weight Links in Rankings

This one’s fascinating, and comes directly from the anonymous source who first shared the leak. In their words: “Google has three buckets/tiers for classifying their link indexes (low, medium, high quality). Click data is used to determine which link graph index tier a document belongs to. See SourceType here, and TotalClicks here.” In summary:

  • If Forbes.com/Cats/ has no clicks it goes into the low-quality index and the link is ignored
  • If Forbes.com/Dogs/ has a high volume of clicks from verifiable devices (all the Chrome-related data discussed previously), it goes into the high-quality index and the link passes ranking signals

Once the link becomes “trusted” because it belongs to a higher tier index, it can flow PageRank and anchors, or be filtered/demoted by link spam systems. Links from the low-quality link index won’t hurt a site’s ranking; they are merely ignored.
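A rough sketch of the tiering idea the source described might look like the following. The three tiers and the notion that only higher tiers pass link signals follow the source's description; all names, thresholds, and numbers are illustrative.

```python
# Illustrative sketch: bin the linking document by click volume, and only let links
# from higher-tier documents pass ranking signals. Thresholds are invented.
def link_index_tier(total_clicks: int) -> str:
    if total_clicks == 0:
        return "low"
    if total_clicks < 1000:   # invented cutoff
        return "medium"
    return "high"

def link_passes_signals(source_doc_total_clicks: int) -> bool:
    """Links from the low-quality index are ignored; higher tiers can pass PageRank and anchors."""
    return link_index_tier(source_doc_total_clicks) != "low"

print(link_index_tier(0), link_passes_signals(0))          # low False
print(link_index_tier(50000), link_passes_signals(50000))  # high True
```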


Big Picture Takeaways for Marketers who Care About Organic Search Traffic

If you care strategically about the value of organic search traffic, but don’t have much use for the technical details of how Google works, this section’s for you. It’s my attempt to sum up much of Google’s evolution from the period this leak covers: 2005 – 2023, and I won’t limit myself exclusively to confirmed elements of the leak.

  1. Brand matters more than anything else
    Google has numerous ways to identify entities, sort, rank, filter, and employ them. Entities include brands (brand names, their official websites, associated social accounts, etc.), and as we've seen in our clickstream research with Datos, they've been on an inexorable path toward exclusively ranking and sending traffic to big, powerful brands that dominate the web > small, independent sites and businesses. If there was one universal piece of advice I had for marketers seeking to broadly improve their organic search rankings and traffic, it would be: "Build a notable, popular, well-recognized brand in your space, outside of Google search."
  2. Experience, expertise, authoritativeness, and trustworthiness (“E-E-A-T”) might not matter as directly as some SEOs think.
    The only mention of topical expertise in the leak we've found so far is a brief notation about Google Maps review contributions. The other aspects of E-E-A-T are either buried, indirect, labeled in hard-to-identify ways, or, more likely (in my opinion), correlated with things Google uses and cares about, but not specific elements of the ranking systems. As Mike noted in his article, there is documentation in the leak suggesting Google can identify authors and treats them as entities in the system. Building up one's influence as an author online may indeed lead to ranking benefits in Google. But what exactly in the ranking systems makes up "E-E-A-T" and how powerful those elements are is an open question. I'm a bit worried that E-E-A-T is 80% propaganda, 20% substance. There are plenty of powerful brands that rank remarkably well in Google and have very little experience, expertise, authoritativeness, or trustworthiness, as HouseFresh's recent, viral article details in depth.
  3. Content and links are secondary when user intention around navigation (and the patterns that intent creates) are present.
    Let's say, for example, that many people in the Seattle area search for "Lehman Brothers" and scroll to page 2, 3, or 4 of the search results until they find the theatre listing for the Lehman Brothers stage production, then click that result. Fairly quickly, Google will learn that's what searchers for those words in that area want. Even if the Wikipedia article about Lehman Brothers' role in the financial crisis of 2008 were to invest heavily in link building and content optimization, it's unlikely they could outrank the user-intent signals (calculated from queries and clicks) of Seattle's theatre-goers. Extending this example to the broader web and search as a whole, if you can create demand for your website among enough likely searchers in the regions you're targeting, you may be able to end-around the need for classic on-and-off-page SEO signals like links, anchor text, optimized content, and the like. The power of Navboost and the intent of users is likely the most powerful ranking factor in Google's systems. As Google VP Alexander Grushetsky put it in a 2019 email to other Google execs (including Danny Sullivan and Pandu Nayak): "We already know, one signal could be more powerful than the whole big system on a given metric. For example, I'm pretty sure that NavBoost alone was / is more positive on clicks (and likely even on precision / utility metrics) by itself than the rest of ranking (BTW, engineers outside of Navboost team used to be also not happy about the power of Navboost, and the fact it was "stealing wins")." Those seeking even more confirmation could review Google engineer Paul Haahr's detailed resume, which states: "I'm the manager for logs-based ranking projects. The team's efforts are currently split among four areas: 1) Navboost. This is already one of Google's strongest ranking signals. Current work is on automation in building new navboost data;"
  4. Classic ranking factors: PageRank, anchors (topical PageRank based on the anchor text of the link), and text-matching have been waning in importance for years. But Page Titles are still quite important.
    This is a finding from Mike’s excellent analysis that I’d be foolish not to call out here. PageRank still appears to have a place in search indexing and rankings, but it’s almost certainly evolved from the original 1998 paper. The document leak insinuates multiple versions of PageRank (rawPagerank, a deprecated PageRank referencing “nearest seeds,” firstCoveragePageRank from when the document was first served, etc.) have been created and discarded over the years. And anchor text links, while present in the leak, don’t seem to be as crucial or omnipresent as I’d have expected from my earlier years in SEO.
  5. For most small and medium businesses and newer creators/publishers, SEO is likely to show poor returns until you’ve established credibility, navigational demand, and a strong reputation among a sizable audience.
    SEO is a big brand, popular domain's game. As an entrepreneur, I'm not ignoring SEO, but I strongly expect that for the years ahead, until/unless SparkToro becomes a much larger, more popular, more searched-for and clicked-on brand in its industry, this website will continue to be outranked, even for its original content, by aggregators and publishers who've existed for 10+ years. This is almost certainly true for other creators, publishers, and SMBs. The content you create is unlikely to perform well in Google if competition from big, popular websites with well-known brands exists. Google no longer rewards scrappy, clever, SEO-savvy operators who know all the right tricks. They reward established brands, search-measurable forms of popularity, and established domains that searchers already know and click. From 1998 – 2018 (or so), one could reasonably start a powerful marketing flywheel with SEO for Google. In 2024, I don't think that's realistic, at least not on the English-language web in competitive sectors.

Next Steps for the Search Industry

I’m excited to see how practitioners with more recent experience and deeper technical knowledge go about analyzing this leak. I encourage anyone curious to dig into the documentation, attempt to connect it to other public documents, statements, testimony, and ranking experiments, then publish their findings.

Historically, some of the search industry’s loudest voices and most prolific publishers have been happy to uncritically repeat Google’s public statements. They write headlines like “Google says XYZ is true,” rather than “Google Claims XYZ; Evidence Suggests Otherwise.”

[Screenshot: examples of headlines uncritically repeating Google's claims]
The SEO industry doesn't benefit from these kinds of headlines

Please, do better. If this leak and the DOJ trial can create just one change, I hope this is it.

When those new to the field read Search Engine Roundtable, Search Engine Land, SE Journal, and the many agency blogs and websites that cover the SEO field’s news, they don’t necessarily know how seriously to take Google’s statements. Journalists and authors should not presume that readers are savvy enough to know that dozens or hundreds of past public comments by Google’s official representatives were later proven wrong.

This obligation isn’t just about helping the search industry—it’s about helping the whole world. Google is one of the most powerful, influential forces for the spread of information and commerce on this planet. Only recently have they been held to some account by governments and reporters. The work of journalists and writers in the search marketing field carries weight in the courts of public opinion, in the halls of elected officials, and in the hearts of Google employees, all of whom have the power to change things for the better or ignore them at our collective peril.


Thank you to Mike King for his invaluable help on this document leak story, to Amanda Natividad for editing help, and to the anonymous source who shared this leak with me. I expect that updates to this piece may arrive over the next few days and weeks as it reaches more eyeballs. If you have findings that support or contradict statements I’ve made here, please feel free to share them in the comments below.
