
When I first saw this project I was confused, because I didn't understand what the tool was for.
After reading through how it works, I realized it is essentially a converter: it lets you operate the database through a RESTful API and returns the results as JSON.
But even after I understood that, I was still quite surprised. Reading a database is simple, so why write a converter for it?
Digging deeper, I found this was a blind spot of mine. I do both frontend and backend work, so in most cases I don't need a converter; I usually already have direct access to the database, and reading it is trivial for me. Writing a dedicated converter for something that simple felt like overkill. But the fact that I don't need it doesn't mean others don't. My guess is that APIJSON's main users are app developers, plus some pure frontend developers who rarely touch the backend. They are the ones who need the ability to "operate the database directly from client requests."
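As I understand it, the core idea of such a converter is that a declarative JSON request from the client is translated into a database query on the server. Here is a minimal sketch of that idea; the request shape and the `json_to_sql` function are my own illustration, not APIJSON's actual implementation:

```python
# Illustrative sketch of a JSON-request-to-SQL converter, in the spirit of
# what APIJSON does. The request shape here is hypothetical, not the real API.

def json_to_sql(request: dict) -> list:
    """Translate {"Table": {"col": value, ...}} into parameterized SELECTs."""
    queries = []
    for table, conditions in request.items():
        where = " AND ".join("{} = ?".format(col) for col in conditions)
        sql = "SELECT * FROM {}".format(table)
        if where:
            sql += " WHERE " + where
        queries.append(sql)
    return queries

# A client asking for one User and that user's Comments in a single request:
print(json_to_sql({"User": {"id": 1}, "Comment": {"userId": 1}}))
```

The point is that the client describes *what* data it wants, and the server mechanically derives the queries, so no backend endpoint has to be hand-written for each screen.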
The author's ability to spot this need is commendable.
That said, here are some personal suggestions, for reference only.
1. Since I have done complex backend development, I want to point out that in many cases the path from database to JSON output requires business logic in between. Directly exposing table fields covers only a small fraction of real-world cases. For example, in an e-commerce scenario, handling an order may require computing order data several times and updating multiple tables, possibly inside a transaction, before the JSON is finally returned. If something fails midway, how do you roll back and handle the error?
So I suggest thinking about whether an extension point is possible here, such as exposing APIs or function hooks that let users plug in custom logic. This area runs deep and deserves careful thought.
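The order scenario above is the kind of logic I mean: several tables must change together, and a failure must leave none of them half-updated. A minimal sketch using `sqlite3` (the table and column names are invented for illustration):

```python
import sqlite3

# Sketch of the e-commerce case: placing an order touches several tables,
# so the updates are wrapped in one transaction and rolled back on error.
# Table and column names are made up for this example.

def place_order(conn, user_id, item_id, qty, price):
    try:
        cur = conn.cursor()
        cur.execute("UPDATE stock SET count = count - ? WHERE item = ?",
                    (qty, item_id))
        cur.execute("INSERT INTO orders (user, item, qty, total) "
                    "VALUES (?, ?, ?, ?)",
                    (user_id, item_id, qty, qty * price))
        conn.commit()      # all-or-nothing: both tables change together
        return {"ok": True, "total": qty * price}
    except sqlite3.Error:
        conn.rollback()    # on failure, no table is left half-updated
        return {"ok": False}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item TEXT, count INTEGER)")
conn.execute("CREATE TABLE orders (user TEXT, item TEXT, qty INTEGER, total REAL)")
conn.execute("INSERT INTO stock VALUES ('book', 10)")
print(place_order(conn, "alice", "book", 2, 9.5))
```

A generic field-to-JSON converter cannot express this coordination by itself, which is why some hook mechanism for custom logic seems necessary.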
2. Since I have done web frontend development myself, I think that if a frontend developer wants to code against a JSON interface early, without waiting for backend changes, they would more likely use mock data. So my guess is that frontend developers' demand for APIJSON won't be strong, which is also why I guessed above that APIJSON's user base is mostly app developers: they are the ones who truly need the ability to "operate the database directly from client requests." Of course I may be wrong; I may simply not understand the needs of other user groups.
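By mocking I mean the frontend hard-codes a canned response in the agreed JSON shape and swaps in the real endpoint later, with no database or server involved. A trivial sketch (the response shape and field names are invented):

```python
# Sketch of frontend-style mocking: return a canned payload with the same
# shape the real backend will eventually produce. Field names are invented.

MOCK_MODE = True

def fetch_user(user_id: int) -> dict:
    if MOCK_MODE:
        # canned data: the UI can be built and tested against this shape
        return {"code": 200, "User": {"id": user_id, "name": "test-user"}}
    raise NotImplementedError("real HTTP call goes here once the backend is ready")

print(fetch_user(1))
```

Because this costs almost nothing, a frontend developer who just needs unblocking rarely needs a full database-access layer like APIJSON.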
3. Your technical choice here is questionable. I see that you maintain a server implementation for every language. That is a bottomless pit: every update has to be re-implemented in each language. If nobody points this out, I think you will keep pouring effort into unnecessary work. My personal suggestion is to go deep on one backend language, ideally Node or Go, since both can be packaged with existing tools into builds that run in many environments. This is only a shallow suggestion; please research it yourself.
4. I noticed APIJSON because of my own project, showdoc (https://github.com/star7th/showdoc). For now I will keep watching, and evaluate whether there is a real need to integrate it, or whether to re-implement the idea in my own way and integrate it deeply into showdoc.